Flattened export of GitHub issues and pull requests from the huggingface/datasets repository: a column summary first, then one record per issue with one field per line. Column summary (string stats are min/max lengths; int64 stats are min/max values):

  url                       string  (lengths 58 to 61)
  repository_url            string  (1 distinct value)
  labels_url                string  (lengths 72 to 75)
  comments_url              string  (lengths 67 to 70)
  events_url                string  (lengths 65 to 68)
  html_url                  string  (lengths 46 to 51)
  id                        int64   (599M to 1.28B)
  node_id                   string  (lengths 18 to 32)
  number                    int64   (1 to 4.53k)
  title                     string  (lengths 1 to 276)
  user                      dict
  labels                    list
  state                     string  (2 distinct values)
  locked                    bool    (1 distinct value)
  assignee                  dict
  assignees                 list
  milestone                 dict
  comments                  list
  created_at                int64   (Unix ms; 1,587B to 1,656B)
  updated_at                int64   (Unix ms; 1,587B to 1,656B)
  closed_at                 int64   (Unix ms; 1,587B to 1,656B)
  author_association        string  (3 distinct values)
  active_lock_reason        null
  body                      string  (lengths 0 to 228k)
  reactions                 dict
  timeline_url              string  (lengths 67 to 70)
  performed_via_github_app  null
  state_reason              string  (1 distinct value)
  draft                     bool    (2 distinct values)
  pull_request              dict
  is_pull_request           bool    (2 distinct values)
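
As a minimal sketch of how a dump with this schema can be loaded and inspected with the `datasets` library (the dataset ID below is a hypothetical placeholder, since the dump does not name its source repository):

```python
# Minimal sketch, assuming this dump comes from a Hugging Face dataset repo.
# "some-user/github-issues" is a hypothetical placeholder ID.
from datasets import load_dataset

ds = load_dataset("some-user/github-issues", split="train")

# Column names and types, matching the summary above.
print(ds.features)

# Spot-check one record: number, title, state, and whether it is a PR.
row = ds[0]
print(row["number"], row["title"], row["state"], row["is_pull_request"])
```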

url: https://api.github.com/repos/huggingface/datasets/issues/1874
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1874/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1874/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1874/events
html_url: https://github.com/huggingface/datasets/pull/1874
id: 807,786,094
node_id: MDExOlB1bGxSZXF1ZXN0NTcyOTYzMjAy
number: 1,874
title: Adding Europarl Bilingual dataset
user: { "login": "lucadiliello", "id": 23355969, "node_id": "MDQ6VXNlcjIzMzU1OTY5", "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucadiliello", "html_url": "https://github.com/lucadiliello", "followers_url": "https://api.github.c...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "is there a way to check errors without subscribing to CircleCI? Because they want access to private repositories when logging.", "I think you need to be logged in to check the errors unfortunately. Feel free to create an account with bitbucket maybe if you don't want it to access your private github repos", "I...
created_at: 1,613,235,724,000
updated_at: 1,614,854,302,000
closed_at: 1,614,854,302,000
author_association: CONTRIBUTOR
active_lock_reason: null
body: Implementation of Europarl bilingual dataset from described [here](https://opus.nlpl.eu/Europarl.php). This dataset allows to use every language pair detailed in the original dataset. The loading script manages also the small errors contained in the original dataset (in very rare cases (1 over 10M) there are some ke...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1874/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1874/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1874", "html_url": "https://github.com/huggingface/datasets/pull/1874", "diff_url": "https://github.com/huggingface/datasets/pull/1874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1874.patch", "merged_at": 1614854302000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1873
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1873/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1873/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1873/events
html_url: https://github.com/huggingface/datasets/pull/1873
id: 807,750,745
node_id: MDExOlB1bGxSZXF1ZXN0NTcyOTM4MTYy
number: 1,873
title: add iapp_wiki_qa_squad
user: { "login": "cstorm125", "id": 15519308, "node_id": "MDQ6VXNlcjE1NTE5MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cstorm125", "html_url": "https://github.com/cstorm125", "followers_url": "https://api.github.com/users/...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,613,223,267,000
updated_at: 1,613,485,318,000
closed_at: 1,613,485,318,000
author_association: CONTRIBUTOR
active_lock_reason: null
body: `iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles. It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset) to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in 5761/742/739 questions from 1529/...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1873/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1873/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1873", "html_url": "https://github.com/huggingface/datasets/pull/1873", "diff_url": "https://github.com/huggingface/datasets/pull/1873.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1873.patch", "merged_at": 1613485318000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1872
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1872/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1872/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1872/events
html_url: https://github.com/huggingface/datasets/issues/1872
id: 807,711,935
node_id: MDU6SXNzdWU4MDc3MTE5MzU=
number: 1,872
title: Adding a new column to the dataset after set_format was called
user: { "login": "villmow", "id": 2743060, "node_id": "MDQ6VXNlcjI3NDMwNjA=", "avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4", "gravatar_id": "", "url": "https://api.github.com/users/villmow", "html_url": "https://github.com/villmow", "followers_url": "https://api.github.com/users/villmow/...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Hi ! Indeed if you add a column to a formatted dataset, then the new dataset gets a new formatting in which:\r\n```\r\nnew formatted columns = (all columns - previously unformatted columns)\r\n```\r\nTherefore the new column is going to be formatted using the `torch` formatting.\r\n\r\nIf you want your new column ...
created_at: 1,613,207,675,000
updated_at: 1,617,112,905,000
closed_at: 1,617,112,905,000
author_association: NONE
active_lock_reason: null
body: Hi, thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if its a problem on my side. I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1"...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1872/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1872/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/1871
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1871/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1871/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1871/events
html_url: https://github.com/huggingface/datasets/pull/1871
id: 807,697,671
node_id: MDExOlB1bGxSZXF1ZXN0NTcyODk5Nzgz
number: 1,871
title: Add newspop dataset
user: { "login": "frankier", "id": 299380, "node_id": "MDQ6VXNlcjI5OTM4MA==", "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frankier", "html_url": "https://github.com/frankier", "followers_url": "https://api.github.com/users/frankie...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Thanks for the changes :)\r\nmerging" ]
created_at: 1,613,201,483,000
updated_at: 1,615,198,365,000
closed_at: 1,615,198,365,000
author_association: CONTRIBUTOR
active_lock_reason: null
body: ""
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1871/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1871/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1871", "html_url": "https://github.com/huggingface/datasets/pull/1871", "diff_url": "https://github.com/huggingface/datasets/pull/1871.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1871.patch", "merged_at": 1615198365000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1870
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1870/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1870/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1870/events
html_url: https://github.com/huggingface/datasets/pull/1870
id: 807,306,564
node_id: MDExOlB1bGxSZXF1ZXN0NTcyNTc4Mjc4
number: 1,870
title: Implement Dataset add_item
user: { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
labels: [ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: { "url": "https://api.github.com/repos/huggingface/datasets/milestones/3", "html_url": "https://github.com/huggingface/datasets/milestone/3", "labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels", "id": 6644287, "node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==", "number": 3, "title...
comments: [ "Thanks @lhoestq for your remarks. Yes, I agree there are still many issues to be tackled... This PR is just a starting point, so that we can discuss how Dataset should be generalized.", "Sure ! I opened an issue #1877 so we can discuss this specific aspect :)", "I am going to implement this consolidation step ...
created_at: 1,613,142,226,000
updated_at: 1,619,172,091,000
closed_at: 1,619,172,091,000
author_association: MEMBER
active_lock_reason: null
body: Implement `Dataset.add_item`. Close #1854.
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1870/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1870/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1870", "html_url": "https://github.com/huggingface/datasets/pull/1870", "diff_url": "https://github.com/huggingface/datasets/pull/1870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1870.patch", "merged_at": 1619172090000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1869
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1869/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1869/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1869/events
html_url: https://github.com/huggingface/datasets/pull/1869
id: 807,159,835
node_id: MDExOlB1bGxSZXF1ZXN0NTcyNDU0NTMy
number: 1,869
title: Remove outdated commands in favor of huggingface-cli
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,613,129,290,000
updated_at: 1,613,146,389,000
closed_at: 1,613,146,388,000
author_association: MEMBER
active_lock_reason: null
body: Removing the old user commands since `huggingface_hub` is going to be used instead. cc @julien-c
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1869/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1869/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1869", "html_url": "https://github.com/huggingface/datasets/pull/1869", "diff_url": "https://github.com/huggingface/datasets/pull/1869.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1869.patch", "merged_at": 1613146388000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1868
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1868/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1868/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1868/events
html_url: https://github.com/huggingface/datasets/pull/1868
id: 807,138,159
node_id: MDExOlB1bGxSZXF1ZXN0NTcyNDM2MjA0
number: 1,868
title: Update oscar sizes
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,613,127,335,000
updated_at: 1,613,127,787,000
closed_at: 1,613,127,786,000
author_association: MEMBER
active_lock_reason: null
body: This commit https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383 removed empty lines from the oscar deduplicated datasets. This PR updates the size of each deduplicated dataset to fix possible `NonMatchingSplitsSizesError` errors. cc @cahya-wirawan
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1868/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1868/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1868", "html_url": "https://github.com/huggingface/datasets/pull/1868", "diff_url": "https://github.com/huggingface/datasets/pull/1868.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1868.patch", "merged_at": 1613127786000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1867
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1867/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1867/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1867/events
html_url: https://github.com/huggingface/datasets/issues/1867
id: 807,127,181
node_id: MDU6SXNzdWU4MDcxMjcxODE=
number: 1,867
title: ERROR WHEN USING SET_TRANSFORM()
user: { "login": "alexvaca0", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexvaca0", "html_url": "https://github.com/alexvaca0", "followers_url": "https://api.github.com/users/...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Hi @alejandrocros it looks like an incompatibility with the current Trainer @sgugger \r\nIndeed currently the Trainer of `transformers` doesn't support a dataset with a transform\r\n\r\nIt looks like it comes from this line: https://github.com/huggingface/transformers/blob/f51188cbe74195c14c5b3e2e8f10c2f435f9751a/...
created_at: 1,613,126,311,000
updated_at: 1,614,607,464,000
closed_at: 1,614,168,043,000
author_association: NONE
active_lock_reason: null
body: Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797 However, when I try to use Trainer from transformers with such dataset, it throws an error: ``` TypeError: __init__() missing 1 required positional arg...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1867/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1867/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/1866
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1866/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1866/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1866/events
html_url: https://github.com/huggingface/datasets/pull/1866
id: 807,017,816
node_id: MDExOlB1bGxSZXF1ZXN0NTcyMzM3NDQ1
number: 1,866
title: Add dataset for Financial PhraseBank
user: { "login": "frankier", "id": 299380, "node_id": "MDQ6VXNlcjI5OTM4MA==", "avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4", "gravatar_id": "", "url": "https://api.github.com/users/frankier", "html_url": "https://github.com/frankier", "followers_url": "https://api.github.com/users/frankie...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Thanks for the feedback. All accepted and metadata regenerated." ]
created_at: 1,613,115,056,000
updated_at: 1,613,571,756,000
closed_at: 1,613,571,756,000
author_association: CONTRIBUTOR
active_lock_reason: null
body: ""
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1866/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1866/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1866", "html_url": "https://github.com/huggingface/datasets/pull/1866", "diff_url": "https://github.com/huggingface/datasets/pull/1866.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1866.patch", "merged_at": 1613571756000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1865
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1865/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1865/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1865/events
html_url: https://github.com/huggingface/datasets/pull/1865
id: 806,388,290
node_id: MDExOlB1bGxSZXF1ZXN0NTcxODE2ODI2
number: 1,865
title: Updated OPUS Open Subtitles Dataset with metadata information
user: { "login": "Valahaar", "id": 19476123, "node_id": "MDQ6VXNlcjE5NDc2MTIz", "avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Valahaar", "html_url": "https://github.com/Valahaar", "followers_url": "https://api.github.com/users/Val...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Hi !\r\nAbout the problems you mentioned:\r\n- Saving the infos is only done for the configurations inside the BUILDER_CONFIGS. Otherwise you would need to run the scripts on ALL language pairs, which is not what we want.\r\n- Moreover when you're on your branch, please specify the path to your local version of th...
created_at: 1,613,049,986,000
updated_at: 1,613,738,289,000
closed_at: 1,613,149,184,000
author_association: CONTRIBUTOR
active_lock_reason: null
body: Close #1844 Problems: - I ran `python datasets-cli test datasets/open_subtitles --save_infos --all_configs`, hence the change in `dataset_infos.json`, but it appears that the metadata features have not been added for all pairs. Any idea why that might be? - Possibly related to the above, I tried doing `pip uninst...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1865/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1865/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1865", "html_url": "https://github.com/huggingface/datasets/pull/1865", "diff_url": "https://github.com/huggingface/datasets/pull/1865.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1865.patch", "merged_at": 1613149184000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1864
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1864/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1864/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1864/events
html_url: https://github.com/huggingface/datasets/issues/1864
id: 806,172,843
node_id: MDU6SXNzdWU4MDYxNzI4NDM=
number: 1,864
title: Add Winogender Schemas
user: { "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/use...
labels: [ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Nevermind, this one is already available on the hub under the name `'wino_bias'`: https://huggingface.co/datasets/wino_bias" ]
created_at: 1,613,031,518,000
updated_at: 1,613,031,591,000
closed_at: 1,613,031,591,000
author_association: CONTRIBUTOR
active_lock_reason: null
body: ## Adding a Dataset - **Name:** Winogender Schemas - **Description:** Winogender Schemas (inspired by Winograd Schemas) are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias in automated coreference resolution systems. - **Paper...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1864/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1864/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/1863
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1863/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1863/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1863/events
html_url: https://github.com/huggingface/datasets/issues/1863
id: 806,171,311
node_id: MDU6SXNzdWU4MDYxNzEzMTE=
number: 1,863
title: Add WikiCREM
user: { "login": "NielsRogge", "id": 48327001, "node_id": "MDQ6VXNlcjQ4MzI3MDAx", "avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4", "gravatar_id": "", "url": "https://api.github.com/users/NielsRogge", "html_url": "https://github.com/NielsRogge", "followers_url": "https://api.github.com/use...
labels: [ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Hi @NielsRogge I would like to work on this dataset.\r\n\r\nThanks!", "Hi @udapy, are you working on this?" ]
created_at: 1,613,031,360,000
updated_at: 1,615,102,033,000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
body: ## Adding a Dataset - **Name:** WikiCREM - **Description:** A large unsupervised corpus for coreference resolution. - **Paper:** https://arxiv.org/abs/1905.06290 - **Github repo:**: https://github.com/vid-koci/bert-commonsense - **Data:** https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3 - **...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1863/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1863/timeline
performed_via_github_app: null
state_reason: null
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/1862
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1862/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1862/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1862/events
html_url: https://github.com/huggingface/datasets/pull/1862
id: 805,722,293
node_id: MDExOlB1bGxSZXF1ZXN0NTcxMjc2ODAx
number: 1,862
title: Fix writing GPU Faiss index
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,612,978,323,000
updated_at: 1,612,981,068,000
closed_at: 1,612,981,067,000
author_association: MEMBER
active_lock_reason: null
body: As reported in by @corticalstack there is currently an error when we try to save a faiss index on GPU. I fixed that by checking the index `getDevice()` method before calling `index_gpu_to_cpu` Close #1859
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1862/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1862/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1862", "html_url": "https://github.com/huggingface/datasets/pull/1862", "diff_url": "https://github.com/huggingface/datasets/pull/1862.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1862.patch", "merged_at": 1612981067000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1861
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1861/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1861/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1861/events
html_url: https://github.com/huggingface/datasets/pull/1861
id: 805,631,215
node_id: MDExOlB1bGxSZXF1ZXN0NTcxMjAwNjA1
number: 1,861
title: Fix Limit url
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,612,971,896,000
updated_at: 1,612,973,700,000
closed_at: 1,612,973,699,000
author_association: MEMBER
active_lock_reason: null
body: The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was removed recently on the master branch of the repo at https://github.com/ilmgut/limit_dataset This PR uses the previous commit sha to download the file instead, as suggested by @Paethon Close #1836
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1861/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1861/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1861", "html_url": "https://github.com/huggingface/datasets/pull/1861", "diff_url": "https://github.com/huggingface/datasets/pull/1861.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1861.patch", "merged_at": 1612973698000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1860
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1860/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1860/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1860/events
html_url: https://github.com/huggingface/datasets/pull/1860
id: 805,510,037
node_id: MDExOlB1bGxSZXF1ZXN0NTcxMDk4OTIz
number: 1,860
title: Add loading from the Datasets Hub + add relative paths in download manager
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "I just added the steps to share a dataset on the datasets hub. It's highly inspired by the steps to share a model in the `transformers` doc.\r\n\r\nMoreover once the new huggingface_hub is released we can update the version in the setup.py. We also need to update the command to create a dataset repo in the documen...
created_at: 1,612,963,451,000
updated_at: 1,613,157,210,000
closed_at: 1,613,157,209,000
author_association: MEMBER
active_lock_reason: null
body: With the new Datasets Hub on huggingface.co it's now possible to have a dataset repo with your own script and data. For example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main contains one script and two json files. You can load it using ```python from datasets import load_dataset d = load_data...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1860/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1860/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1860", "html_url": "https://github.com/huggingface/datasets/pull/1860", "diff_url": "https://github.com/huggingface/datasets/pull/1860.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1860.patch", "merged_at": 1613157209000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1859
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1859/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1859/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1859/events
html_url: https://github.com/huggingface/datasets/issues/1859
id: 805,479,025
node_id: MDU6SXNzdWU4MDU0NzkwMjU=
number: 1,859
title: Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU)
user: { "login": "corticalstack", "id": 3995321, "node_id": "MDQ6VXNlcjM5OTUzMjE=", "avatar_url": "https://avatars.githubusercontent.com/u/3995321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/corticalstack", "html_url": "https://github.com/corticalstack", "followers_url": "https://api.github....
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Hi @corticalstack ! Thanks for reporting. Indeed in the recent versions of Faiss we must use `getDevice` to check if the index in on GPU.\r\n\r\nI'm opening a PR", "I fixed this issue. It should work fine now.\r\nFeel free to try it out by installing `datasets` from source.\r\nOtherwise you can wait for the next...
created_at: 1,612,960,860,000
updated_at: 1,612,981,932,000
closed_at: 1,612,981,067,000
author_association: NONE
active_lock_reason: null
body: Error serializing faiss index. Error as follows: `Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index` Note: `torch.cuda.is_availabl...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1859/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1859/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/1858
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1858/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1858/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1858/events
html_url: https://github.com/huggingface/datasets/pull/1858
id: 805,477,774
node_id: MDExOlB1bGxSZXF1ZXN0NTcxMDcxNzIx
number: 1,858
title: Clean config getenvs
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,612,960,754,000
updated_at: 1,612,972,350,000
closed_at: 1,612,972,349,000
author_association: MEMBER
active_lock_reason: null
body: Following #1848 Remove double getenv calls and fix one issue with rarfile cc @albertvillanova
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1858/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1858/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1858", "html_url": "https://github.com/huggingface/datasets/pull/1858", "diff_url": "https://github.com/huggingface/datasets/pull/1858.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1858.patch", "merged_at": 1612972349000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1857
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1857/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1857/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1857/events
html_url: https://github.com/huggingface/datasets/issues/1857
id: 805,391,107
node_id: MDU6SXNzdWU4MDUzOTExMDc=
number: 1,857
title: Unable to upload "community provided" dataset - 400 Client Error
user: { "login": "mwrzalik", "id": 1376337, "node_id": "MDQ6VXNlcjEzNzYzMzc=", "avatar_url": "https://avatars.githubusercontent.com/u/1376337?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mwrzalik", "html_url": "https://github.com/mwrzalik", "followers_url": "https://api.github.com/users/mwrza...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Hi ! We're in the process of switching the community datasets to git repos, exactly like what we're doing for models.\r\nYou can find an example here:\r\nhttps://huggingface.co/datasets/lhoestq/custom_squad/tree/main\r\n\r\nWe'll update the CLI in the coming days and do a new release :)\r\n\r\nAlso cc @julien-c ma...
created_at: 1,612,953,541,000
updated_at: 1,627,967,173,000
closed_at: 1,627,967,173,000
author_association: CONTRIBUTOR
active_lock_reason: null
body: Hi, i'm trying to a upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens: ``` $ datasets-cli login $ datasets-cli upload_dataset my_dataset About to upload file /path/to/my_dataset/dataset_infos.json to S3...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1857/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1857/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/1856
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1856/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1856/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1856/events
html_url: https://github.com/huggingface/datasets/issues/1856
id: 805,360,200
node_id: MDU6SXNzdWU4MDUzNjAyMDA=
number: 1,856
title: load_dataset("amazon_polarity") NonMatchingChecksumError
user: { "login": "yanxi0830", "id": 19946372, "node_id": "MDQ6VXNlcjE5OTQ2Mzcy", "avatar_url": "https://avatars.githubusercontent.com/u/19946372?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yanxi0830", "html_url": "https://github.com/yanxi0830", "followers_url": "https://api.github.com/users/...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Hi ! This issue may be related to #996 \r\nThis comes probably from the Quota Exceeded error from Google Drive.\r\nCan you try again tomorrow and see if you still have the error ?\r\n\r\nOn my side I didn't get any error today with `load_dataset(\"amazon_polarity\")`", "+1 encountering this issue as well", "@l...
created_at: 1,612,951,256,000
updated_at: 1,647,352,524,000
closed_at: 1,647,352,523,000
author_association: NONE
active_lock_reason: null
body: Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback ...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1856/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1856/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/1855
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1855/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1855/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1855/events
html_url: https://github.com/huggingface/datasets/pull/1855
id: 805,256,579
node_id: MDExOlB1bGxSZXF1ZXN0NTcwODkzNDY3
number: 1,855
title: Minor fix in the docs
user: { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,612,942,063,000
updated_at: 1,612,960,389,000
closed_at: 1,612,960,389,000
author_association: MEMBER
active_lock_reason: null
body: ""
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1855/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1855/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1855", "html_url": "https://github.com/huggingface/datasets/pull/1855", "diff_url": "https://github.com/huggingface/datasets/pull/1855.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1855.patch", "merged_at": 1612960389000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1854
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1854/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1854/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1854/events
html_url: https://github.com/huggingface/datasets/issues/1854
id: 805,204,397
node_id: MDU6SXNzdWU4MDUyMDQzOTc=
number: 1,854
title: Feature Request: Dataset.add_item
user: { "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/ss...
labels: [ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
state: closed
locked: false
assignee: { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
assignees: [ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
milestone: null
comments: [ "Hi @sshleifer.\r\n\r\nI am not sure of understanding the need of the `add_item` approach...\r\n\r\nBy just reading your \"Desired API\" section, I would say you could (nearly) get it with a 1-column Dataset:\r\n```python\r\ndata = {\"input_ids\": [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])]}\r\...
created_at: 1,612,937,160,000
updated_at: 1,619,172,090,000
closed_at: 1,619,172,090,000
author_association: CONTRIBUTOR
active_lock_reason: null
body: I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.m...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1854/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1854/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/1853
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1853/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1853/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1853/events
html_url: https://github.com/huggingface/datasets/pull/1853
id: 804,791,166
node_id: MDExOlB1bGxSZXF1ZXN0NTcwNTAwMjc4
number: 1,853
title: Configure library root logger at the module level
user: { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,612,894,272,000
updated_at: 1,612,960,354,000
closed_at: 1,612,960,354,000
author_association: MEMBER
active_lock_reason: null
body: Configure library root logger at the datasets.logging module level (singleton-like). By doing it this way: - we are sure configuration is done only once: module level code is only runned once - no need of global variable - no need of threading lock
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1853/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1853/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1853", "html_url": "https://github.com/huggingface/datasets/pull/1853", "diff_url": "https://github.com/huggingface/datasets/pull/1853.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1853.patch", "merged_at": 1612960354000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1852
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1852/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1852/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1852/events
html_url: https://github.com/huggingface/datasets/pull/1852
id: 804,633,033
node_id: MDExOlB1bGxSZXF1ZXN0NTcwMzY3NTU1
number: 1,852
title: Add Arabic Speech Corpus
user: { "login": "zaidalyafeai", "id": 15667714, "node_id": "MDQ6VXNlcjE1NjY3NzE0", "avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zaidalyafeai", "html_url": "https://github.com/zaidalyafeai", "followers_url": "https://api.github.c...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,612,882,946,000
updated_at: 1,613,038,735,000
closed_at: 1,613,038,735,000
author_association: CONTRIBUTOR
active_lock_reason: null
body: ""
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1852/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1852/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1852", "html_url": "https://github.com/huggingface/datasets/pull/1852", "diff_url": "https://github.com/huggingface/datasets/pull/1852.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1852.patch", "merged_at": 1613038734000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1851
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1851/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1851/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1851/events
html_url: https://github.com/huggingface/datasets/pull/1851
id: 804,523,174
node_id: MDExOlB1bGxSZXF1ZXN0NTcwMjc2MTk5
number: 1,851
title: set bert_score version dependency
user: { "login": "pvl", "id": 3596, "node_id": "MDQ6VXNlcjM1OTY=", "avatar_url": "https://avatars.githubusercontent.com/u/3596?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pvl", "html_url": "https://github.com/pvl", "followers_url": "https://api.github.com/users/pvl/followers", "following_u...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,612,875,067,000
updated_at: 1,612,880,508,000
closed_at: 1,612,880,508,000
author_association: CONTRIBUTOR
active_lock_reason: null
body: Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843)
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1851/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1851/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1851", "html_url": "https://github.com/huggingface/datasets/pull/1851", "diff_url": "https://github.com/huggingface/datasets/pull/1851.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1851.patch", "merged_at": 1612880508000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1850
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1850/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1850/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1850/events
html_url: https://github.com/huggingface/datasets/pull/1850
id: 804,412,249
node_id: MDExOlB1bGxSZXF1ZXN0NTcwMTg0MDAx
number: 1,850
title: Add cord 19 dataset
user: { "login": "ggdupont", "id": 5583410, "node_id": "MDQ6VXNlcjU1ODM0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ggdupont", "html_url": "https://github.com/ggdupont", "followers_url": "https://api.github.com/users/ggdup...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Cleaned-up version of previous PR: https://github.com/huggingface/datasets/pull/1129", "@lhoestq FYI", "Before merging I might tweak a little bit the dummy data to avoid having to check if the `document_parses` and `embeddings` directories exist or not. I'll do that later today", "Looks all good now ! Thanks...
created_at: 1,612,866,128,000
updated_at: 1,612,883,786,000
closed_at: 1,612,883,786,000
author_association: CONTRIBUTOR
active_lock_reason: null
body: Initial version only reading the metadata in CSV. ### Checklist: - [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _infos(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIG...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1850/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1850/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1850", "html_url": "https://github.com/huggingface/datasets/pull/1850", "diff_url": "https://github.com/huggingface/datasets/pull/1850.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1850.patch", "merged_at": 1612883785000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1849
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1849/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1849/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1849/events
html_url: https://github.com/huggingface/datasets/issues/1849
id: 804,292,971
node_id: MDU6SXNzdWU4MDQyOTI5NzE=
number: 1,849
title: Add TIMIT
user: { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
labels: [ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "@patrickvonplaten Could you please help me with how the output text has to be represented in the data? TIMIT has Words, Phonemes and texts. Also has lot on info on the speaker and the dialect. Could you please help me? An example of how to arrange it would be super helpful!\r\n\r\n", "Hey @vrindaprabhu - sure I'...
created_at: 1,612,855,781,000
updated_at: 1,615,787,977,000
closed_at: 1,615,787,977,000
author_association: MEMBER
active_lock_reason: null
body: ## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1849/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1849/timeline
performed_via_github_app: null
state_reason: completed
draft: null
pull_request: null
is_pull_request: false

url: https://api.github.com/repos/huggingface/datasets/issues/1848
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1848/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1848/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1848/events
html_url: https://github.com/huggingface/datasets/pull/1848
id: 803,826,506
node_id: MDExOlB1bGxSZXF1ZXN0NTY5Njg5ODU1
number: 1,848
title: Refactoring: Create config module
user: { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: []
created_at: 1,612,809,831,000
updated_at: 1,612,960,175,000
closed_at: 1,612,960,175,000
author_association: MEMBER
active_lock_reason: null
body: Refactorize configuration settings into their own module. This could be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created.
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1848/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1848/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1848", "html_url": "https://github.com/huggingface/datasets/pull/1848", "diff_url": "https://github.com/huggingface/datasets/pull/1848.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1848.patch", "merged_at": 1612960175000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1847
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1847/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1847/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1847/events
html_url: https://github.com/huggingface/datasets/pull/1847
id: 803,824,694
node_id: MDExOlB1bGxSZXF1ZXN0NTY5Njg4NDY0
number: 1,847
title: [Metrics] Add word error metric metric
user: { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Feel free to merge once the CI is all green ;)" ]
created_at: 1,612,809,675,000
updated_at: 1,612,893,201,000
closed_at: 1,612,893,201,000
author_association: MEMBER
active_lock_reason: null
body: This PR adds the word error rate metric to datasets. WER: https://en.wikipedia.org/wiki/Word_error_rate for speech recognition. WER is the main metric used in ASR. `jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939)
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1847/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1847/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1847", "html_url": "https://github.com/huggingface/datasets/pull/1847", "diff_url": "https://github.com/huggingface/datasets/pull/1847.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1847.patch", "merged_at": 1612893201000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1846
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1846/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1846/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1846/events
html_url: https://github.com/huggingface/datasets/pull/1846
id: 803,806,380
node_id: MDExOlB1bGxSZXF1ZXN0NTY5NjczMzcy
number: 1,846
title: Make DownloadManager downloaded/extracted paths accessible
user: { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "First I was thinking of the dict, which makes sense for .download, mapping URL to downloaded path. However does this make sense for .extract, mapping the downloaded path to the extracted path? I ask this because the user did not chose the downloaded path, so this is completely unknown for them...", "There could ...
created_at: 1,612,808,082,000
updated_at: 1,614,262,218,000
closed_at: 1,614,262,218,000
author_association: MEMBER
active_lock_reason: null
body: Make accessible the file paths downloaded/extracted by DownloadManager. Close #1831. The approach: - I set these paths as DownloadManager attributes: these are DownloadManager's concerns - To access to these from DatasetBuilder, I set the DownloadManager instance as DatasetBuilder attribute: object composition
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1846/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1846/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1846", "html_url": "https://github.com/huggingface/datasets/pull/1846", "diff_url": "https://github.com/huggingface/datasets/pull/1846.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1846.patch", "merged_at": 1614262218000 }
is_pull_request: true

url: https://api.github.com/repos/huggingface/datasets/issues/1845
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/1845/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/1845/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/1845/events
html_url: https://github.com/huggingface/datasets/pull/1845
id: 803,714,493
node_id: MDExOlB1bGxSZXF1ZXN0NTY5NTk2MTIz
number: 1,845
title: Enable logging propagation and remove logging handler
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: [ "Thank you @lhoestq. This logging configuration makes more sense to me.\r\n\r\nOnce propagation is allowed, the end-user can customize logging behavior and add custom handlers to the proper top logger in the hierarchy.\r\n\r\nAnd I also agree with following the best practices and removing any custom handlers:\r\n- ...
created_at: 1,612,801,333,000
updated_at: 1,612,880,558,000
closed_at: 1,612,880,557,000
author_association: MEMBER
active_lock_reason: null
body: We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691 But since it's now fixed we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as asked in #1826 I also re...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/1845/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/1845/timeline
performed_via_github_app: null
state_reason: null
draft: false
pull_request: { "url": "https://api.github.com/repos/huggingface/datasets/pulls/1845", "html_url": "https://github.com/huggingface/datasets/pull/1845", "diff_url": "https://github.com/huggingface/datasets/pull/1845.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1845.patch", "merged_at": 1612880557000 }
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/1844
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1844/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1844/comments
https://api.github.com/repos/huggingface/datasets/issues/1844/events
https://github.com/huggingface/datasets/issues/1844
803,588,125
MDU6SXNzdWU4MDM1ODgxMjU=
1,844
Update Open Subtitles corpus with original sentence IDs
{ "login": "Valahaar", "id": 19476123, "node_id": "MDQ6VXNlcjE5NDc2MTIz", "avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Valahaar", "html_url": "https://github.com/Valahaar", "followers_url": "https://api.github.com/users/Val...
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" } ]
closed
false
null
[]
null
[ "Hi ! You're right this can can useful.\r\nThis should be easy to add, so feel free to give it a try if you want to contribute :)\r\nI think we just need to add it to the _generate_examples method of the OpenSubtitles dataset builder [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles...
1,612,792,513,000
1,613,151,538,000
1,613,151,538,000
CONTRIBUTOR
null
Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat a...
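To illustrate the request above, a hedged sketch of what one generated example could look like with the original ids preserved — the field names and id layout here are illustrative assumptions, not the actual loader schema:

```python
# Hypothetical shape of one Open Subtitles example that keeps the original
# document-level identifiers alongside the translation pair:
example = {
    "id": "0",
    "meta": {
        "imdb_id": 123456,        # media id (hypothetical field name)
        "subtitle_id": 7890,      # subtitle file id (hypothetical field name)
        "sentence_ids": [3, 4],   # sentence ids within each subtitle file
    },
    "translation": {"en": "Hello.", "fr": "Bonjour."},
}
```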
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1844/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1844/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1843
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1843/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1843/comments
https://api.github.com/repos/huggingface/datasets/issues/1843/events
https://github.com/huggingface/datasets/issues/1843
803,565,393
MDU6SXNzdWU4MDM1NjUzOTM=
1,843
MustC Speech Translation
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
open
false
null
[]
null
[ "Hi @patrickvonplaten I would like to work on this dataset. \r\n\r\nThanks! ", "That's awesome! Actually, I just noticed that this dataset might become a bit too big!\r\n\r\nMuST-C is the main dataset used for IWSLT19 and should probably be added as a standalone dataset. Would you be interested also in adding `d...
1,612,790,865,000
1,621,004,014,000
null
MEMBER
null
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1843/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1843/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1842
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1842/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1842/comments
https://api.github.com/repos/huggingface/datasets/issues/1842/events
https://github.com/huggingface/datasets/issues/1842
803,563,149
MDU6SXNzdWU4MDM1NjMxNDk=
1,842
Add AMI Corpus
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
open
false
null
[]
null
[]
1,612,790,700,000
1,612,855,576,000
null
MEMBER
null
## Adding a Dataset - **Name:** *AMI* - **Description:** *The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elic...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1842/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1842/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1841
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1841/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1841/comments
https://api.github.com/repos/huggingface/datasets/issues/1841/events
https://github.com/huggingface/datasets/issues/1841
803,561,123
MDU6SXNzdWU4MDM1NjExMjM=
1,841
Add ljspeech
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
closed
false
null
[]
null
[]
1,612,790,546,000
1,615,787,942,000
1,615,787,942,000
MEMBER
null
## Adding a Dataset - **Name:** *ljspeech* - **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of ap...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1841/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1841/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1840
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1840/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1840/comments
https://api.github.com/repos/huggingface/datasets/issues/1840/events
https://github.com/huggingface/datasets/issues/1840
803,560,039
MDU6SXNzdWU4MDM1NjAwMzk=
1,840
Add common voice
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
closed
false
null
[]
null
[ "I have started working on adding this dataset.", "Hey @BirgerMoell - awesome that you started working on Common Voice. Common Voice is a bit special since, there is no direct download link to download the data. In these cases we usually consider two options:\r\n\r\n1) Find a hacky solution to extract the downloa...
1,612,790,465,000
1,647,789,820,000
1,615,787,781,000
MEMBER
null
## Adding a Dataset - **Name:** *common voice* - **Description:** *Mozilla Common Voice Dataset* - **Paper:** Homepage: https://voice.mozilla.org/en/datasets - **Data:** https://voice.mozilla.org/en/datasets - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/dat...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1840/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1840/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1839
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1839/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1839/comments
https://api.github.com/repos/huggingface/datasets/issues/1839/events
https://github.com/huggingface/datasets/issues/1839
803,559,164
MDU6SXNzdWU4MDM1NTkxNjQ=
1,839
Add Voxforge
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
open
false
null
[]
null
[]
1,612,790,396,000
1,612,790,911,000
null
MEMBER
null
## Adding a Dataset - **Name:** *voxforge* - **Description:** *VoxForge is a language classification dataset. It consists of user submitted audio clips submitted to the website. In this release, data from 6 languages is collected - English, Spanish, French, German, Russian, and Italian. Since the website is constant...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1839/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1839/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1838
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1838/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1838/comments
https://api.github.com/repos/huggingface/datasets/issues/1838/events
https://github.com/huggingface/datasets/issues/1838
803,557,521
MDU6SXNzdWU4MDM1NTc1MjE=
1,838
Add tedlium
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
open
false
null
[]
null
[ "Hi @patrickvonplaten \r\nI can have a look to this dataset later since I am trying to add the OpenSLR dataset https://github.com/huggingface/datasets/pull/2173\r\nHopefully I have enough space since the compressed file is 21GB. The release 3 is even bigger: 54GB :-0" ]
1,612,790,272,000
1,617,983,861,000
null
MEMBER
null
## Adding a Dataset - **Name:** *tedlium* - **Description:** *The TED-LIUM 1-3 corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.* - **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ &, https://www.openslr.org/51...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1838/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1838/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1837
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1837/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1837/comments
https://api.github.com/repos/huggingface/datasets/issues/1837/events
https://github.com/huggingface/datasets/issues/1837
803,555,650
MDU6SXNzdWU4MDM1NTU2NTA=
1,837
Add VCTK
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
closed
false
null
[]
null
[ "@patrickvonplaten I'd like to take this, if nobody has already done it. I have added datasets before through the datasets sprint, but I feel rusty on the details, so I'll look at the guide as well as similar audio PRs (#1878 in particular comes to mind). If there is any detail I should be aware of please, let me k...
1,612,790,128,000
1,640,703,908,000
1,640,703,908,000
MEMBER
null
## Adding a Dataset - **Name:** *VCTK* - **Description:** *This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent arch...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1837/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1837/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1836
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1836/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1836/comments
https://api.github.com/repos/huggingface/datasets/issues/1836/events
https://github.com/huggingface/datasets/issues/1836
803,531,837
MDU6SXNzdWU4MDM1MzE4Mzc=
1,836
test.json has been removed from the limit dataset repo (breaks dataset)
{ "login": "Paethon", "id": 237550, "node_id": "MDQ6VXNlcjIzNzU1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Paethon", "html_url": "https://github.com/Paethon", "followers_url": "https://api.github.com/users/Paethon/fo...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "Thanks for the heads up ! I'm opening a PR to fix that" ]
1,612,788,353,000
1,612,973,698,000
1,612,973,698,000
NONE
null
https://github.com/huggingface/datasets/blob/16042b233dbff2a7585110134e969204c69322c3/datasets/limit/limit.py#L51 The URL is not valid anymore since test.json has been removed in master for some reason. Directly referencing the last commit works: `https://raw.githubusercontent.com/ilmgut/limit_dataset/0707d3989cd...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1836/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1836/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1835
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1835/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1835/comments
https://api.github.com/repos/huggingface/datasets/issues/1835/events
https://github.com/huggingface/datasets/issues/1835
803,524,790
MDU6SXNzdWU4MDM1MjQ3OTA=
1,835
Add CHiME4 dataset
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 2725241052, ...
open
false
null
[]
null
[]
1,612,787,798,000
1,612,790,011,000
null
MEMBER
null
## Adding a Dataset - **Name:** Chime4 - **Description:** Chime4 is a dataset for automatic speech recognition. It is especially useful for evaluating models in a noisy environment and for multi-channel ASR - **Paper:** Dataset comes from a challenge: http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/ . Results pape...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1835/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1835/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1834
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1834/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1834/comments
https://api.github.com/repos/huggingface/datasets/issues/1834/events
https://github.com/huggingface/datasets/pull/1834
803,517,094
MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4
1,834
Fixes base_url of limit dataset
{ "login": "Paethon", "id": 237550, "node_id": "MDQ6VXNlcjIzNzU1MA==", "avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Paethon", "html_url": "https://github.com/Paethon", "followers_url": "https://api.github.com/users/Paethon/fo...
[]
closed
false
null
[]
null
[ "OK, apparently it is a lot more complicated than simply changing the URL? Going to make an issue." ]
1,612,787,195,000
1,612,788,170,000
1,612,788,170,000
NONE
null
`test.json` is not available in the master branch of the repository anymore. Linking to a specific commit.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1834/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1834/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1834", "html_url": "https://github.com/huggingface/datasets/pull/1834", "diff_url": "https://github.com/huggingface/datasets/pull/1834.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1834.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1833
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1833/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1833/comments
https://api.github.com/repos/huggingface/datasets/issues/1833/events
https://github.com/huggingface/datasets/pull/1833
803,120,978
MDExOlB1bGxSZXF1ZXN0NTY5MDk5MTUx
1,833
Add OSCAR dataset card
{ "login": "pjox", "id": 635220, "node_id": "MDQ6VXNlcjYzNTIyMA==", "avatar_url": "https://avatars.githubusercontent.com/u/635220?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pjox", "html_url": "https://github.com/pjox", "followers_url": "https://api.github.com/users/pjox/followers", ...
[]
closed
false
null
[]
null
[ "@lhoestq Thanks for the suggestions! I agree with all of them. Should I accept them one by one or can I accept them all at once? When I try to load the whole diff GitHub is complaining and it does no render them well (probably my browser?) 😅 ", "I just merged the tables as suggested 😄 . However I noticed somet...
1,612,748,389,000
1,613,138,965,000
1,613,138,904,000
CONTRIBUTOR
null
I added more information and completed the dataset card for OSCAR which was started by @lhoestq in his previous [PR](https://github.com/huggingface/datasets/pull/1824).
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1833/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1833/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1833", "html_url": "https://github.com/huggingface/datasets/pull/1833", "diff_url": "https://github.com/huggingface/datasets/pull/1833.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1833.patch", "merged_at": 1613138904000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1832
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1832/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1832/comments
https://api.github.com/repos/huggingface/datasets/issues/1832/events
https://github.com/huggingface/datasets/issues/1832
802,880,897
MDU6SXNzdWU4MDI4ODA4OTc=
1,832
Looks like nokogumbo is up-to-date now, so this is no longer needed.
{ "login": "JimmyJim1", "id": 68724553, "node_id": "MDQ6VXNlcjY4NzI0NTUz", "avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JimmyJim1", "html_url": "https://github.com/JimmyJim1", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[]
1,612,680,727,000
1,612,805,249,000
1,612,805,249,000
NONE
null
Looks like nokogumbo is up-to-date now, so this is no longer needed. __Originally posted by @dependabot in https://github.com/discourse/discourse/pull/11373#issuecomment-738993432__
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1832/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1832/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1831
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1831/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1831/comments
https://api.github.com/repos/huggingface/datasets/issues/1831/events
https://github.com/huggingface/datasets/issues/1831
802,868,854
MDU6SXNzdWU4MDI4Njg4NTQ=
1,831
Some question about raw dataset download info in the project .
{ "login": "svjack", "id": 27874014, "node_id": "MDQ6VXNlcjI3ODc0MDE0", "avatar_url": "https://avatars.githubusercontent.com/u/27874014?v=4", "gravatar_id": "", "url": "https://api.github.com/users/svjack", "html_url": "https://github.com/svjack", "followers_url": "https://api.github.com/users/svjack/fo...
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi ! The `dl_manager` is a `DownloadManager` object and is responsible for downloading the raw data files.\r\nIt is used by dataset builders in their `_split_generators` method to download the raw data files that are necessary to build the datasets splits.\r\n\r\nThe `Conll2003` class is a dataset builder, and so ...
1,612,676,016,000
1,614,262,218,000
1,614,262,218,000
NONE
null
Hi, I reviewed the code in https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py. The _split_generators function holds the actual logic for downloading the raw datasets with dl_manager, and the Conll2003 class is used via import_main_class in the load_dataset function. My question is that, with this logic I...
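For reference, a minimal sketch of the builder pattern the question is about — `_split_generators` receives the `DownloadManager` and uses it to fetch the raw files, while `_generate_examples` reads them (the URL below is a placeholder):

```python
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # The DownloadManager downloads (and caches) the raw data files.
        path = dl_manager.download_and_extract("https://example.com/data.zip")  # placeholder URL
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": path},
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```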
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1831/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1831/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1830
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1830/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1830/comments
https://api.github.com/repos/huggingface/datasets/issues/1830/events
https://github.com/huggingface/datasets/issues/1830
802,790,075
MDU6SXNzdWU4MDI3OTAwNzU=
1,830
using map on loaded Tokenizer 10x - 100x slower than default Tokenizer?
{ "login": "wumpusman", "id": 7662740, "node_id": "MDQ6VXNlcjc2NjI3NDA=", "avatar_url": "https://avatars.githubusercontent.com/u/7662740?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wumpusman", "html_url": "https://github.com/wumpusman", "followers_url": "https://api.github.com/users/wu...
[]
open
false
null
[]
null
[ "Hi @wumpusman \r\n`datasets` has a caching mechanism that allows to cache the results of `.map` so that when you want to re-run it later it doesn't recompute it again.\r\nSo when you do `.map`, what actually happens is:\r\n1. compute the hash used to identify your `map` for the cache\r\n2. apply your function on e...
1,612,645,226,000
1,614,203,774,000
null
NONE
null
This could totally relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower: ```` def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"): words_u...
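For context, a minimal sketch of the kind of `map` call being timed — the dataset and tokenizer names are placeholders; the slowdown discussed here would show up while applying the function, not in the cache lookup:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")  # placeholder dataset
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# batched=True lets a fast (Rust-backed) tokenizer process many texts per call,
# which is usually the difference between seconds and minutes.
tokenized = dataset.map(lambda batch: tokenizer(batch["text"]), batched=True)
```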
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1830/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1830/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1829
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1829/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1829/comments
https://api.github.com/repos/huggingface/datasets/issues/1829/events
https://github.com/huggingface/datasets/pull/1829
802,693,600
MDExOlB1bGxSZXF1ZXN0NTY4NzgzNjA5
1,829
Add Tweet Eval Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[]
1,612,614,985,000
1,612,790,274,000
1,612,790,273,000
CONTRIBUTOR
null
Closes Draft PR #1407. Notes: 1. I have excluded `mapping.txt` from the dataset as it only contained the name mappings, which are already present in the ClassLabels. 2. I have also excluded the textual names for the emojis mentioned in the [mapping](https://github.com/cardiffnlp/tweeteval/blob/main/datasets/emoji/...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1829/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1829/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1829", "html_url": "https://github.com/huggingface/datasets/pull/1829", "diff_url": "https://github.com/huggingface/datasets/pull/1829.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1829.patch", "merged_at": 1612790273000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1828
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1828/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1828/comments
https://api.github.com/repos/huggingface/datasets/issues/1828/events
https://github.com/huggingface/datasets/pull/1828
802,449,234
MDExOlB1bGxSZXF1ZXN0NTY4NTkwNDM2
1,828
Add CelebA Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Hi @gchhablani! Thanks for all the contributions! We definitely want more image datasets, but Face datasets are tricky in general, in this one includes predicting attributes such as Attractiveness, Gender, or Race, which can be pretty problematic.\r\n\r\nWould you be up for starting with only object classification...
1,612,556,455,000
1,613,657,827,000
1,613,657,827,000
CONTRIBUTOR
null
Trying to add CelebA Dataset. Need help with testing. Loading examples takes a lot of time so I am unable to generate the `dataset_infos.json` and unable to test. Also, need help with creating `dummy_data.zip`. Additionally, trying to load a few examples using `load_dataset('./datasets/celeb_a',split='train[10:20]...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1828/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1828/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1828", "html_url": "https://github.com/huggingface/datasets/pull/1828", "diff_url": "https://github.com/huggingface/datasets/pull/1828.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1828.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1827
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1827/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1827/comments
https://api.github.com/repos/huggingface/datasets/issues/1827/events
https://github.com/huggingface/datasets/issues/1827
802,353,974
MDU6SXNzdWU4MDIzNTM5NzQ=
1,827
Regarding On-the-fly Data Loading
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Possible duplicate\r\n\r\n#1776 https://github.com/huggingface/datasets/issues/\r\n\r\nreally looking PR for this feature", "Hi @acul3 \r\n\r\nIssue #1776 talks about doing on-the-fly data pre-processing, which I think is solved in the next release as mentioned in the issue #1825. I also look forward to using t...
1,612,547,028,000
1,613,656,516,000
1,613,656,516,000
CONTRIBUTOR
null
Hi, I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset into RAM at any given point. Thanks, Gunjan
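A minimal sketch of what the question is after — `datasets` memory-maps its Arrow files, so wrapping a dataset in a framework data loader reads batches from disk on the fly; PyTorch is shown here as one possible consumer, and the dataset name is a placeholder:

```python
import torch
from datasets import load_dataset

dataset = load_dataset("imdb", split="train")  # placeholder dataset
dataset.set_format(type="torch", columns=["label"])  # only these columns are returned

# The DataLoader pulls rows lazily; the underlying Arrow table stays
# memory-mapped on disk rather than being loaded into RAM.
loader = torch.utils.data.DataLoader(dataset, batch_size=32)
for batch in loader:
    pass  # a training step would go here
```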
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1827/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1827/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1826
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1826/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1826/comments
https://api.github.com/repos/huggingface/datasets/issues/1826/events
https://github.com/huggingface/datasets/pull/1826
802,074,744
MDExOlB1bGxSZXF1ZXN0NTY4Mjc4OTI2
1,826
Print error message with filename when malformed CSV
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
[]
closed
false
null
[]
null
[]
1,612,523,279,000
1,612,892,367,000
1,612,892,367,000
MEMBER
null
Print an error message specifying the filename when a CSV file is malformed. Close #1821
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1826/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1826/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1826", "html_url": "https://github.com/huggingface/datasets/pull/1826", "diff_url": "https://github.com/huggingface/datasets/pull/1826.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1826.patch", "merged_at": 1612892366000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1825
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1825/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1825/comments
https://api.github.com/repos/huggingface/datasets/issues/1825/events
https://github.com/huggingface/datasets/issues/1825
802,073,925
MDU6SXNzdWU4MDIwNzM5MjU=
1,825
Datasets library not suitable for huge text datasets.
{ "login": "alexvaca0", "id": 35173563, "node_id": "MDQ6VXNlcjM1MTczNTYz", "avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alexvaca0", "html_url": "https://github.com/alexvaca0", "followers_url": "https://api.github.com/users/...
[]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "Hi ! Looks related to #861 \r\n\r\nYou are right: tokenizing a dataset using map takes a lot of space since it can store `input_ids` but also `token_type_ids`, `attention_mask` and `special_tokens_mask`. Moreover if your tokenization function returns python integers then by default they'll be stored as int64 which...
1,612,523,210,000
1,617,113,041,000
1,615,887,840,000
NONE
null
Hi, I'm trying to use the datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it grows to several TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really designed for datasets this ...
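One space-saving measure that comes up in this discussion — storing token ids as int32 instead of the int64 that plain Python ints default to — can be sketched as follows; `dataset` and `tokenize` are placeholders, and whether this alone suffices for a 187GB corpus is a separate question:

```python
import datasets

features = datasets.Features(
    {"input_ids": datasets.Sequence(datasets.Value("int32"))}
)

# Passing explicit features to map() makes Arrow store input_ids as int32,
# halving that column's on-disk size compared to the int64 default.
tokenized = dataset.map(
    lambda batch: {"input_ids": tokenize(batch["text"])},  # tokenize() is a placeholder
    batched=True,
    remove_columns=["text"],
    features=features,
)
```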
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1825/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1825/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1824
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1824/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1824/comments
https://api.github.com/repos/huggingface/datasets/issues/1824/events
https://github.com/huggingface/datasets/pull/1824
802,048,281
MDExOlB1bGxSZXF1ZXN0NTY4MjU3MTU3
1,824
Add OSCAR dataset card
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "Hi @lhoestq! When are you planning to release the version with this dataset?\r\n\r\nBTW: What a huge README file :astonished:", "Next week !", "Closing in favor of #1833" ]
1,612,521,026,000
1,620,239,054,000
1,612,783,833,000
MEMBER
null
I started adding the dataset card for OSCAR ! For now it's just basic info for all the different configurations in `Dataset Structure`. In particular the Data Splits section tells how many samples there are for each config. The Data Instances section shows an example for each config, and it also shows the size in MB....
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1824/reactions", "total_count": 4, "+1": 2, "-1": 0, "laugh": 0, "hooray": 1, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1824/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1824", "html_url": "https://github.com/huggingface/datasets/pull/1824", "diff_url": "https://github.com/huggingface/datasets/pull/1824.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1824.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1823
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1823/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1823/comments
https://api.github.com/repos/huggingface/datasets/issues/1823/events
https://github.com/huggingface/datasets/pull/1823
802,042,181
MDExOlB1bGxSZXF1ZXN0NTY4MjUyMjIx
1,823
Add FewRel Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\n\r\nSorry for the late response. What do you mean when you say \"adding names to default config\"? Should I handle \"pid2name\" in the same config as \"default\"?", "Yes I was thinking of having the pid2name field available in the default configuration (and therefore only have one config). What d...
1,612,520,523,000
1,614,599,780,000
1,614,594,099,000
CONTRIBUTOR
null
Hi, This PR closes this [Card](https://github.com/huggingface/datasets/projects/1#card-53285184) and Issue #1757. I wasn't sure how to add `pid2name` along with the dataset so I added it as a separate configuration. For each (head, tail, tokens) triplet, I have created one example. I have added the dictionary key...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1823/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1823/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1823", "html_url": "https://github.com/huggingface/datasets/pull/1823", "diff_url": "https://github.com/huggingface/datasets/pull/1823.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1823.patch", "merged_at": 1614594099000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1822
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1822/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1822/comments
https://api.github.com/repos/huggingface/datasets/issues/1822/events
https://github.com/huggingface/datasets/pull/1822
802,003,835
MDExOlB1bGxSZXF1ZXN0NTY4MjIxMzIz
1,822
Add Hindi Discourse Analysis Natural Language Inference Dataset
{ "login": "avinsit123", "id": 33565881, "node_id": "MDQ6VXNlcjMzNTY1ODgx", "avatar_url": "https://avatars.githubusercontent.com/u/33565881?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avinsit123", "html_url": "https://github.com/avinsit123", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Could you also run `make style` to fix the CI check on code formatting ?", "@lhoestq completed and resolved all comments." ]
1,612,517,454,000
1,613,383,059,000
1,613,383,059,000
CONTRIBUTOR
null
# Dataset Card for Hindi Discourse Analysis Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#dat...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1822/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1822/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1822", "html_url": "https://github.com/huggingface/datasets/pull/1822", "diff_url": "https://github.com/huggingface/datasets/pull/1822.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1822.patch", "merged_at": 1613383059000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1821
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1821/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1821/comments
https://api.github.com/repos/huggingface/datasets/issues/1821/events
https://github.com/huggingface/datasets/issues/1821
801,747,647
MDU6SXNzdWU4MDE3NDc2NDc=
1,821
Provide better exception message when one of many files results in an exception
{ "login": "david-waterworth", "id": 5028974, "node_id": "MDQ6VXNlcjUwMjg5NzQ=", "avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4", "gravatar_id": "", "url": "https://api.github.com/users/david-waterworth", "html_url": "https://github.com/david-waterworth", "followers_url": "https://ap...
[]
closed
false
null
[]
null
[ "Hi!\r\n\r\nThank you for reporting this issue. I agree that the information about the exception should be more clear and explicit.\r\n\r\nI could take on this issue.\r\n\r\nOn the meantime, as you can see from the exception stack trace, HF Datasets uses pandas to read the CSV files. You can pass arguments to `pand...
1,612,486,143,000
1,612,892,367,000
1,612,892,367,000
NONE
null
I find when I process many files, i.e. ``` train_files = glob.glob('train*.csv') validation_files = glob.glob('validation*.csv') datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files)) ``` I sometimes encounter an error due to one of the files being malformed (i.e. no dat...
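Until the loader reports the offending file itself, a hedged workaround is to pre-scan the globbed files with pandas (which `datasets` uses under the hood for CSV) and print the filename on failure:

```python
import glob
import pandas as pd

for path in glob.glob("train*.csv") + glob.glob("validation*.csv"):
    try:
        pd.read_csv(path, nrows=5)  # cheap sanity check on the header and first rows
    except Exception as err:
        print(f"Malformed CSV: {path}: {err}")
```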
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1821/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1821/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1820
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1820/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1820/comments
https://api.github.com/repos/huggingface/datasets/issues/1820/events
https://github.com/huggingface/datasets/pull/1820
801,529,936
MDExOlB1bGxSZXF1ZXN0NTY3ODI4OTg1
1,820
Add metrics usage examples and tests
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,612,463,030,000
1,612,533,601,000
1,612,533,600,000
MEMBER
null
All metrics finally have usage examples and proper fast + slow tests :) I added examples of usage for every metric, and I use doctest to make sure they all work as expected. For "slow" metrics such as bert_score or bleurt which require to download + run a transformer model, the download + forward pass are only do...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1820/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1820/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1820", "html_url": "https://github.com/huggingface/datasets/pull/1820", "diff_url": "https://github.com/huggingface/datasets/pull/1820.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1820.patch", "merged_at": 1612533600000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1819
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1819/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1819/comments
https://api.github.com/repos/huggingface/datasets/issues/1819/events
https://github.com/huggingface/datasets/pull/1819
801,448,670
MDExOlB1bGxSZXF1ZXN0NTY3NzYyMzI2
1,819
Fixed spelling `S3Fileystem` to `S3FileSystem`
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[]
1,612,456,606,000
1,612,457,547,000
1,612,457,546,000
MEMBER
null
Fixed documentation spelling errors: wrong `S3Fileystem`, right `S3FileSystem`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1819/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1819/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1819", "html_url": "https://github.com/huggingface/datasets/pull/1819", "diff_url": "https://github.com/huggingface/datasets/pull/1819.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1819.patch", "merged_at": 1612457546000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1818
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1818/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1818/comments
https://api.github.com/repos/huggingface/datasets/issues/1818/events
https://github.com/huggingface/datasets/issues/1818
800,958,776
MDU6SXNzdWU4MDA5NTg3NzY=
1,818
Loading local dataset raise requests.exceptions.ConnectTimeout
{ "login": "Alxe1", "id": 15032072, "node_id": "MDQ6VXNlcjE1MDMyMDcy", "avatar_url": "https://avatars.githubusercontent.com/u/15032072?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Alxe1", "html_url": "https://github.com/Alxe1", "followers_url": "https://api.github.com/users/Alxe1/follow...
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting. This was indeed a bug introduced when we moved the `json` dataset loader inside the `datasets` package (before that, the `json` loader was fetched online, as all the other dataset scripts).\r\n\r\nThis should be fixed on master now. Feel free to install `datasets` from source to try it o...
1,612,418,123,000
1,654,097,922,000
1,654,097,922,000
NONE
null
Load local dataset: ``` dataset = load_dataset('json', data_files=["../../data/json.json"]) train = dataset["train"] print(train.features) train1 = train.map(lambda x: {"labels": 1}) print(train1[:2]) ``` but it raised requests.exceptions.ConnectTimeout: ``` /Users/littlely/myvirtual/tf2/bin/python3.7 /Us...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1818/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1818/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1817
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1817/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1817/comments
https://api.github.com/repos/huggingface/datasets/issues/1817/events
https://github.com/huggingface/datasets/issues/1817
800,870,652
MDU6SXNzdWU4MDA4NzA2NTI=
1,817
pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 599 but got length 1500
{ "login": "LuCeHe", "id": 9610770, "node_id": "MDQ6VXNlcjk2MTA3NzA=", "avatar_url": "https://avatars.githubusercontent.com/u/9610770?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LuCeHe", "html_url": "https://github.com/LuCeHe", "followers_url": "https://api.github.com/users/LuCeHe/foll...
[]
open
false
null
[]
null
[ "Hi !\r\nThe error you have is due to the `input_ids` column not having the same number of examples as the other columns.\r\nIndeed you're concatenating the `input_ids` at this line:\r\n\r\nhttps://github.com/LuCeHe/GenericTools/blob/431835d8e13ec24dceb5ee4dc4ae58f0e873b091/KerasTools/lm_preprocessing.py#L134\r\n\r...
1,612,405,823,000
1,612,706,664,000
null
NONE
null
I am trying to preprocess any dataset in this package with GPT-2 tokenizer, so I need to structure the datasets as long sequences of text without padding. I've been following a couple of your tutorials and here you can find the script that is failing right at the end https://github.com/LuCeHe/GenericTools/blob/maste...
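The error above typically means a batched `map` function returned columns of different lengths. A minimal sketch of the usual fix — concatenating and then re-chunking every returned column to the same number of rows (the standard `group_texts` pattern from LM preprocessing); `tokenized` is assumed to come from a prior tokenization `map`:

```python
block_size = 1024

def group_texts(batch):
    # Flatten all sequences in the batch into one long list per column.
    concatenated = {k: sum(batch[k], []) for k in batch.keys()}
    total = (len(concatenated["input_ids"]) // block_size) * block_size
    # Re-chunk every column identically so all returned columns have the
    # same number of rows, which is what Arrow checks when writing.
    return {
        k: [v[i : i + block_size] for i in range(0, total, block_size)]
        for k, v in concatenated.items()
    }

grouped = tokenized.map(group_texts, batched=True)
```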
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1817/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1817/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1816
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1816/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1816/comments
https://api.github.com/repos/huggingface/datasets/issues/1816/events
https://github.com/huggingface/datasets/pull/1816
800,660,995
MDExOlB1bGxSZXF1ZXN0NTY3MTExMjEx
1,816
Doc2dial rc update to latest version
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songf...
[]
closed
false
null
[]
null
[ "- update data loader and readme for latest version 1.0.1" ]
1,612,382,934,000
1,613,402,124,000
1,613,401,473,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1816/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1816/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1816", "html_url": "https://github.com/huggingface/datasets/pull/1816", "diff_url": "https://github.com/huggingface/datasets/pull/1816.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1816.patch", "merged_at": 1613401473000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1815
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1815/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1815/comments
https://api.github.com/repos/huggingface/datasets/issues/1815/events
https://github.com/huggingface/datasets/pull/1815
800,610,017
MDExOlB1bGxSZXF1ZXN0NTY3MDY3NjU1
1,815
Add CCAligned Multilingual Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Hi !\r\n\r\nWe already have some datasets that can have many many configurations possible.\r\nTo be able to support that, we allow to subclass BuilderConfig to add as many additional parameters as you may need.\r\nThis way users can load any language they want. For example the [bible_para](https://github.com/huggi...
1,612,378,792,000
1,614,601,983,000
1,614,594,981,000
CONTRIBUTOR
null
Hello, I'm trying to add [CCAligned Multilingual Dataset](http://www.statmt.org/cc-aligned/). This has the potential to close #1756. This dataset has two types - Document-Pairs, and Sentence-Pairs. The datasets are huge, so I won't be able to test all of them. At the same time, a user might only want to downlo...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1815/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1815/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1815", "html_url": "https://github.com/huggingface/datasets/pull/1815", "diff_url": "https://github.com/huggingface/datasets/pull/1815.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1815.patch", "merged_at": 1614594981000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1814
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1814/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1814/comments
https://api.github.com/repos/huggingface/datasets/issues/1814/events
https://github.com/huggingface/datasets/pull/1814
800,516,236
MDExOlB1bGxSZXF1ZXN0NTY2OTg4NTI1
1,814
Add Freebase QA Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Hi @lhoestq \r\n\r\nThanks for approving. Request you to close PR #1435 as well." ]
1,612,371,469,000
1,612,468,071,000
1,612,455,708,000
CONTRIBUTOR
null
Closes PR #1435. Fixed issues with PR #1809. Requesting @lhoestq to review.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1814/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1814/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1814", "html_url": "https://github.com/huggingface/datasets/pull/1814", "diff_url": "https://github.com/huggingface/datasets/pull/1814.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1814.patch", "merged_at": 1612455708000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1813
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1813/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1813/comments
https://api.github.com/repos/huggingface/datasets/issues/1813/events
https://github.com/huggingface/datasets/pull/1813
800,435,973
MDExOlB1bGxSZXF1ZXN0NTY2OTIxNDcz
1,813
Support future datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,612,366,009,000
1,612,521,228,000
1,612,521,227,000
MEMBER
null
If a dataset is available at the version of the local installation of `datasets` (e.g. 1.2.0), then loading this dataset means loading the script at this version. However, when trying to load a dataset that is only available on master, users currently have to specify `script_version="master"` in `load_dataset` to mak...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1813/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1813/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1813", "html_url": "https://github.com/huggingface/datasets/pull/1813", "diff_url": "https://github.com/huggingface/datasets/pull/1813.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1813.patch", "merged_at": 1612521227000 }
true
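As a concrete illustration of the behavior PR #1813 describes, this is how a user would pin the loading script to master at the time. A sketch only; in later releases this parameter was renamed:

```python
from datasets import load_dataset

# Default: resolve the loading script at the installed library version.
squad = load_dataset("squad")

# Explicitly fetch the script from the master branch instead, e.g. for a
# dataset added to the repository after the installed release.
squad_master = load_dataset("squad", script_version="master")
```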
https://api.github.com/repos/huggingface/datasets/issues/1812
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1812/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1812/comments
https://api.github.com/repos/huggingface/datasets/issues/1812/events
https://github.com/huggingface/datasets/pull/1812
799,379,178
MDExOlB1bGxSZXF1ZXN0NTY2MDMxODIy
1,812
Add CIFAR-100 Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Hi @lhoestq,\r\nI have updated with the changes from the review.", "Thanks for approving :)" ]
1,612,279,379,000
1,612,782,618,000
1,612,780,746,000
CONTRIBUTOR
null
Adding CIFAR-100 Dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1812/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1812/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1812", "html_url": "https://github.com/huggingface/datasets/pull/1812", "diff_url": "https://github.com/huggingface/datasets/pull/1812.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1812.patch", "merged_at": 1612780746000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1811
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1811/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1811/comments
https://api.github.com/repos/huggingface/datasets/issues/1811/events
https://github.com/huggingface/datasets/issues/1811
799,211,060
MDU6SXNzdWU3OTkyMTEwNjA=
1,811
Unable to add Multi-label Datasets
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Thanks for adding this dataset! As far as I know `supervised_keys` is mostly a holdover from TFDS, but isn't really used, so feel free to drop it (@lhoestq or @thomwolf correct me if I'm wrong). It definitely shouldn't be blocking :) ", "I can confirm that it comes from TFDS and is not used at the moment.", "...
1,612,266,656,000
1,613,657,791,000
1,613,657,791,000
CONTRIBUTOR
null
I am trying to add [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in supervised keys as `supervised_keys=("img", "fine_label")` raises no issue. But trying `supervised_keys=("img", "fine_label","coarse...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1811/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1811/timeline
null
completed
null
null
false
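Since the comments on #1811 confirm `supervised_keys` is an unused TFDS holdover, a multi-label dataset can simply declare both label columns in its `Features` and drop `supervised_keys` altogether. A sketch: the shapes and class counts match CIFAR-100, but the exact feature layout is an assumption, not the merged script:

```python
import datasets

features = datasets.Features(
    {
        "img": datasets.Array3D(shape=(32, 32, 3), dtype="uint8"),
        "fine_label": datasets.ClassLabel(num_classes=100),
        "coarse_label": datasets.ClassLabel(num_classes=20),
    }
)

# No supervised_keys needed: both labels are first-class columns.
info = datasets.DatasetInfo(
    description="CIFAR-100 with fine and coarse labels (illustrative).",
    features=features,
)
```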
https://api.github.com/repos/huggingface/datasets/issues/1810
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1810/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1810/comments
https://api.github.com/repos/huggingface/datasets/issues/1810/events
https://github.com/huggingface/datasets/issues/1810
799,168,650
MDU6SXNzdWU3OTkxNjg2NTA=
1,810
Add Hateful Memes Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" }, { "id": 3608941089, ...
open
false
null
[]
null
[ "I am not sure, but would `datasets.Sequence(datasets.Sequence(datasets.Sequence(datasets.Value(\"int\")))` work?", "Also, I found the information for loading only subsets of the data [here](https://github.com/huggingface/datasets/blob/master/docs/source/splits.rst).", "Hi @lhoestq,\r\n\r\nRequest you to check ...
1,612,263,239,000
1,638,965,039,000
null
CONTRIBUTOR
null
## Add Hateful Memes Dataset - **Name:** Hateful Memes - **Description:** [https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set]( https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set) - **Paper:** [https://arxiv.org/pdf/2005.04790.pdf](https://arxiv.org/pdf/2005.04790.pdf) - **Data:** [Thi...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1810/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1810/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1809
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1809/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1809/comments
https://api.github.com/repos/huggingface/datasets/issues/1809/events
https://github.com/huggingface/datasets/pull/1809
799,059,141
MDExOlB1bGxSZXF1ZXN0NTY1NzY4ODQz
1,809
Add FreebaseQA dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Hi ! It looks like this PR contains changes about other datasets than freebase_qa such as DuoRC.\r\n\r\nCan you remove these changes please ?", "Hi @lhoestq,\r\n\r\nI think this happened because of rebasing. I'm unable to remove the duorc commit from the branch. GEM, Arabic sarcasm datasets are also there. I ca...
1,612,254,953,000
1,612,372,505,000
1,612,370,586,000
CONTRIBUTOR
null
Adding FreebaseQA dataset suggested in PR #1435 with minor edits. Also closes that PR. Requesting @lhoestq to review.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1809/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1809/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1809", "html_url": "https://github.com/huggingface/datasets/pull/1809", "diff_url": "https://github.com/huggingface/datasets/pull/1809.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1809.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1808
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1808/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1808/comments
https://api.github.com/repos/huggingface/datasets/issues/1808/events
https://github.com/huggingface/datasets/issues/1808
798,879,180
MDU6SXNzdWU3OTg4NzkxODA=
1,808
writing Datasets in a human readable format
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "f...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" }, { "id": 1935892912, "node_id": "MDU6...
closed
false
null
[]
null
[ "AFAIK, there is currently no built-in method on the `Dataset` object to do this.\r\nHowever, a workaround is to directly use the Arrow table backing the dataset, **but it implies loading the whole dataset in memory** (correct me if I'm mistaken @lhoestq).\r\n\r\nYou can convert the Arrow table to a pandas datafram...
1,612,234,540,000
1,654,097,893,000
1,654,097,893,000
NONE
null
Hi, I see there is a `save_to_disk` function to save data, but this is not a human-readable format. Is there a way I could save a Dataset object in a human-readable format, such as JSON, to a file? Thanks @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1808/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1808/timeline
null
completed
null
null
false
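Following the workaround suggested in the comments on #1808, a small sketch of exporting a dataset to human-readable JSON Lines via pandas. Note it materializes the split in memory, so it suits small datasets; newer releases also expose a `Dataset.to_json` helper:

```python
from datasets import load_dataset

dataset = load_dataset("squad", split="validation")

# Convert the Arrow-backed dataset to pandas, then dump JSON Lines:
# one human-readable JSON object per example.
dataset.to_pandas().to_json("squad_validation.jsonl", orient="records", lines=True)
```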
https://api.github.com/repos/huggingface/datasets/issues/1807
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1807/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1807/comments
https://api.github.com/repos/huggingface/datasets/issues/1807/events
https://github.com/huggingface/datasets/pull/1807
798,823,591
MDExOlB1bGxSZXF1ZXN0NTY1NTczNzU5
1,807
Adding an aggregated dataset for the GEM benchmark
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[]
closed
false
null
[]
null
[ "Nice !" ]
1,612,226,393,000
1,612,306,121,000
1,612,289,218,000
MEMBER
null
This dataset gathers modified versions of several other conditional text generation datasets which together make up the shared task for the Generation Evaluation and Metrics workshop (think GLUE for text generation). The changes from the original datasets are detailed in the Dataset Cards on the GEM website, which ar...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1807/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1807/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1807", "html_url": "https://github.com/huggingface/datasets/pull/1807", "diff_url": "https://github.com/huggingface/datasets/pull/1807.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1807.patch", "merged_at": 1612289218000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1806
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1806/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1806/comments
https://api.github.com/repos/huggingface/datasets/issues/1806/events
https://github.com/huggingface/datasets/pull/1806
798,607,869
MDExOlB1bGxSZXF1ZXN0NTY1Mzk0ODIz
1,806
Update details to MLSUM dataset
{ "login": "padipadou", "id": 15138872, "node_id": "MDQ6VXNlcjE1MTM4ODcy", "avatar_url": "https://avatars.githubusercontent.com/u/15138872?v=4", "gravatar_id": "", "url": "https://api.github.com/users/padipadou", "html_url": "https://github.com/padipadou", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[ "Thanks!" ]
1,612,204,512,000
1,612,205,188,000
1,612,205,181,000
CONTRIBUTOR
null
Update details to MLSUM dataset
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1806/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1806/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1806", "html_url": "https://github.com/huggingface/datasets/pull/1806", "diff_url": "https://github.com/huggingface/datasets/pull/1806.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1806.patch", "merged_at": 1612205181000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1805
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1805/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1805/comments
https://api.github.com/repos/huggingface/datasets/issues/1805/events
https://github.com/huggingface/datasets/issues/1805
798,498,053
MDU6SXNzdWU3OTg0OTgwNTM=
1,805
can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index
{ "login": "abarbosa94", "id": 6608232, "node_id": "MDQ6VXNlcjY2MDgyMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abarbosa94", "html_url": "https://github.com/abarbosa94", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
[ "Hi ! Indeed we used to require mapping functions to be picklable with `pickle` or `dill` in order to cache the resulting datasets. And FAISS indexes are not picklable unfortunately.\r\n\r\nBut since #1703 this is no longer required (the caching will simply be disabled). This change will be available in the next re...
1,612,196,057,000
1,615,041,166,000
1,615,041,166,000
CONTRIBUTOR
null
So, I have the following instances in my dataset ``` {'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of this increase in rotation?', 'answer': 'C', 'example_id': 'ARCCH_Mercury_7175875', 'options':[{'option_context': 'One effect of ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1805/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1805/timeline
null
completed
null
null
false
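For context on #1805, a minimal sketch of the FAISS workflow involved (requires `faiss-cpu`; the tiny random embeddings are placeholders). The crash came from calling `map()` on a dataset that already carried a FAISS index, since the index cannot be pickled for the cache fingerprint; after #1703 the library disables caching in that case instead of raising:

```python
import numpy as np
from datasets import Dataset

ds = Dataset.from_dict({"text": ["first doc", "second doc", "third doc"]})

# Attach placeholder embeddings and build a FAISS index over them.
ds = ds.map(lambda ex: {"embeddings": np.random.rand(8).astype("float32")})
ds.add_faiss_index(column="embeddings")

query = np.random.rand(8).astype("float32")
scores, examples = ds.get_nearest_examples("embeddings", query, k=2)
print(examples["text"])
```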
https://api.github.com/repos/huggingface/datasets/issues/1804
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1804/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1804/comments
https://api.github.com/repos/huggingface/datasets/issues/1804/events
https://github.com/huggingface/datasets/pull/1804
798,483,881
MDExOlB1bGxSZXF1ZXN0NTY1MjkzMTc3
1,804
Add SICK dataset
{ "login": "calpt", "id": 36051308, "node_id": "MDQ6VXNlcjM2MDUxMzA4", "avatar_url": "https://avatars.githubusercontent.com/u/36051308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/calpt", "html_url": "https://github.com/calpt", "followers_url": "https://api.github.com/users/calpt/follow...
[]
closed
false
null
[]
null
[]
1,612,195,064,000
1,612,547,188,000
1,612,540,165,000
CONTRIBUTOR
null
Adds the SICK dataset (http://marcobaroni.org/composes/sick.html). Closes #1772. Edit: also closes #1632, which is the original issue requesting the dataset. The newer one is a duplicate.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1804/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1804/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1804", "html_url": "https://github.com/huggingface/datasets/pull/1804", "diff_url": "https://github.com/huggingface/datasets/pull/1804.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1804.patch", "merged_at": 1612540165000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1803
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1803/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1803/comments
https://api.github.com/repos/huggingface/datasets/issues/1803/events
https://github.com/huggingface/datasets/issues/1803
798,243,904
MDU6SXNzdWU3OTgyNDM5MDQ=
1,803
Querying examples from big datasets is slower than small datasets
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "Hello, @lhoestq / @gaceladri : We have been seeing similar behavior with bigger datasets, where querying time increases. Are you folks aware of any solution that fixes this problem yet? ", "Hi ! I'm pretty sure that it can be fixed by using the Arrow IPC file format instead of the raw streaming format but I ha...
1,612,177,703,000
1,628,100,661,000
1,628,100,642,000
MEMBER
null
After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than small datasets. For example ```python from datasets import load_dataset b1 = load_dataset("bookcorpus", split="train[:1%]") b50 = load_dataset("bookcorpus", split="train[:50%]") b100 = load_dataset("bookcorp...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1803/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1803/timeline
null
completed
null
null
false
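A sketch reproducing the measurement behind #1803, adapted from the snippet in the issue body. Bookcorpus is a large download; any dataset loadable at two sizes shows the same effect:

```python
import timeit
from datasets import load_dataset

b1 = load_dataset("bookcorpus", split="train[:1%]")
b50 = load_dataset("bookcorpus", split="train[:50%]")

# Time a single-example query on each; the larger split used to respond
# noticeably slower despite identical row access.
for name, ds in [("1%", b1), ("50%", b50)]:
    seconds = timeit.timeit(lambda: ds[0], number=1000)
    print(f"{name}: {seconds / 1000 * 1e6:.1f} microseconds per query")
```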
https://api.github.com/repos/huggingface/datasets/issues/1802
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1802/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1802/comments
https://api.github.com/repos/huggingface/datasets/issues/1802/events
https://github.com/huggingface/datasets/pull/1802
797,924,468
MDExOlB1bGxSZXF1ZXN0NTY0ODE4NDIy
1,802
add github of contributors
{ "login": "vasudevgupta7", "id": 53136577, "node_id": "MDQ6VXNlcjUzMTM2NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/53136577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vasudevgupta7", "html_url": "https://github.com/vasudevgupta7", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
[ "@lhoestq Can you confirm if this format is fine? I will update cards based on your feedback.", "On HuggingFace side we also have a mapping of hf user => github user (GitHub info used to be required when signing up until not long ago – cc @gary149 @beurkinger) so we can also add a link to HF profile", "All the ...
1,612,151,359,000
1,612,346,992,000
1,612,346,790,000
CONTRIBUTOR
null
This PR will add contributors' GitHub ids at the end of every dataset card.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1802/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1802/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1802", "html_url": "https://github.com/huggingface/datasets/pull/1802", "diff_url": "https://github.com/huggingface/datasets/pull/1802.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1802.patch", "merged_at": 1612346790000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1801
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1801/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1801/comments
https://api.github.com/repos/huggingface/datasets/issues/1801/events
https://github.com/huggingface/datasets/pull/1801
797,814,275
MDExOlB1bGxSZXF1ZXN0NTY0NzMwODYw
1,801
[GEM] Updated the source link of the data to update correct tokenized version.
{ "login": "mounicam", "id": 11708999, "node_id": "MDQ6VXNlcjExNzA4OTk5", "avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mounicam", "html_url": "https://github.com/mounicam", "followers_url": "https://api.github.com/users/mou...
[]
closed
false
null
[]
null
[ "@mounicam we'll keep the original version in the Turk dataset proper, and use the updated file in the GEM aggregated dataset which I'll add later today\r\n\r\n@lhoestq do not merge, I'll close when I've submitted the GEM dataset PR :) ", "Closed by https://github.com/huggingface/datasets/pull/1807" ]
1,612,127,839,000
1,612,271,858,000
1,612,271,848,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1801/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1801/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1801", "html_url": "https://github.com/huggingface/datasets/pull/1801", "diff_url": "https://github.com/huggingface/datasets/pull/1801.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1801.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1800
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1800/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1800/comments
https://api.github.com/repos/huggingface/datasets/issues/1800/events
https://github.com/huggingface/datasets/pull/1800
797,798,689
MDExOlB1bGxSZXF1ZXN0NTY0NzE5MjA3
1,800
Add DuoRC Dataset
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Thanks for approving @lhoestq!\r\nWill apply these changes for the other datasets I've added too." ]
1,612,123,319,000
1,612,328,505,000
1,612,306,166,000
CONTRIBUTOR
null
Hi, DuoRC SelfRC is one type of the [DuoRC Dataset](https://duorc.github.io/). DuoRC SelfRC is a crowdsourced Abstractive/Extractive Question-Answering dataset based on Wikipedia movie plots. It contains examples that may have answers in the movie plot, synthesized answers which are not present in the movie plot, or...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1800/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1800/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1800", "html_url": "https://github.com/huggingface/datasets/pull/1800", "diff_url": "https://github.com/huggingface/datasets/pull/1800.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1800.patch", "merged_at": 1612306166000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1799
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1799/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1799/comments
https://api.github.com/repos/huggingface/datasets/issues/1799/events
https://github.com/huggingface/datasets/pull/1799
797,789,439
MDExOlB1bGxSZXF1ZXN0NTY0NzEyMzUy
1,799
Update: SWDA - Fixed code to use all metadata features. Added comments and cleaned c…
{ "login": "gmihaila", "id": 22454783, "node_id": "MDQ6VXNlcjIyNDU0Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gmihaila", "html_url": "https://github.com/gmihaila", "followers_url": "https://api.github.com/users/gmi...
[]
closed
false
null
[]
null
[ "@yjernite Pushed all the changes you recommended. Thank you for your help!" ]
1,612,120,735,000
1,612,908,373,000
1,612,885,798,000
CONTRIBUTOR
null
This is a dataset I currently use in my research, and I realized some features are not being returned. The previous code was not using all available metadata and was kind of messy; I fixed the code to use all metadata and made some modifications to be more efficient and better formatted. Please let me know if I need to ma...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1799/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1799/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1799", "html_url": "https://github.com/huggingface/datasets/pull/1799", "diff_url": "https://github.com/huggingface/datasets/pull/1799.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1799.patch", "merged_at": 1612885798000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1798
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1798/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1798/comments
https://api.github.com/repos/huggingface/datasets/issues/1798/events
https://github.com/huggingface/datasets/pull/1798
797,766,818
MDExOlB1bGxSZXF1ZXN0NTY0Njk2NjE1
1,798
Add Arabic sarcasm dataset
{ "login": "mapmeld", "id": 643918, "node_id": "MDQ6VXNlcjY0MzkxOA==", "avatar_url": "https://avatars.githubusercontent.com/u/643918?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mapmeld", "html_url": "https://github.com/mapmeld", "followers_url": "https://api.github.com/users/mapmeld/fo...
[]
closed
false
null
[]
null
[ "@lhoestq thanks for the comments - I believe these are now addressed. I re-generated the datasets_info.json and dummy data" ]
1,612,114,735,000
1,612,989,553,000
1,612,348,554,000
CONTRIBUTOR
null
This MIT license dataset: https://github.com/iabufarha/ArSarcasm Via https://sites.google.com/view/ar-sarcasm-sentiment-detection/
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1798/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1798/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1798", "html_url": "https://github.com/huggingface/datasets/pull/1798", "diff_url": "https://github.com/huggingface/datasets/pull/1798.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1798.patch", "merged_at": 1612348554000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1797
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1797/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1797/comments
https://api.github.com/repos/huggingface/datasets/issues/1797/events
https://github.com/huggingface/datasets/issues/1797
797,357,901
MDU6SXNzdWU3OTczNTc5MDE=
1,797
Connection error
{ "login": "smile0925", "id": 46243662, "node_id": "MDQ6VXNlcjQ2MjQzNjYy", "avatar_url": "https://avatars.githubusercontent.com/u/46243662?v=4", "gravatar_id": "", "url": "https://api.github.com/users/smile0925", "html_url": "https://github.com/smile0925", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[ "Hi ! For future references let me add a link to our discussion here : https://github.com/huggingface/datasets/issues/759#issuecomment-770684693\r\n\r\nLet me know if you manage to fix your proxy issue or if we can do something on our end to help you :)" ]
1,611,991,965,000
1,628,100,577,000
1,628,100,577,000
NONE
null
Hi, I am hitting the error below; please help me, and thanks. `train_data = datasets.load_dataset("xsum", split="train")` `ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/xsum/xsum.py`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1797/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1797/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1796
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1796/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1796/comments
https://api.github.com/repos/huggingface/datasets/issues/1796/events
https://github.com/huggingface/datasets/issues/1796
797,329,905
MDU6SXNzdWU3OTczMjk5MDU=
1,796
Filter on dataset too much slowww
{ "login": "ayubSubhaniya", "id": 20911334, "node_id": "MDQ6VXNlcjIwOTExMzM0", "avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ayubSubhaniya", "html_url": "https://github.com/ayubSubhaniya", "followers_url": "https://api.githu...
[]
open
false
null
[]
null
[ "When I use the filter on the arrow table directly, it works like butter. But I can't find a way to update the table in `Dataset` object.\r\n\r\n```\r\nds_table = dataset.data.filter(mask=dataset['flag'])\r\n```", "@thomwolf @lhoestq can you guys please take a look and recommend some solution.", "Hi ! Currently...
1,611,979,759,000
1,613,668,164,000
null
NONE
null
I have a dataset with 50M rows. For pre-processing, I need to tokenize it and filter out rows with overly long sequences. My tokenization took roughly 12 minutes. I used `map()` with batch size 1024 and multi-processing with 96 processes. When I applied the `filter()` function, it took too much time. I need to filter se...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1796/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1796/timeline
null
null
null
null
false
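The first comment on #1796 filters the Arrow table directly but cannot put it back into a `Dataset`. A workaround sketch that stays within the public API: build a boolean mask and use `select()` with the matching indices, which only records row indices instead of rewriting every row (assuming a boolean "flag" column as in that comment):

```python
import numpy as np
from datasets import Dataset

# Tiny stand-in for the 50M-row dataset from the issue.
dataset = Dataset.from_dict({"text": ["a", "bb", "ccc"], "flag": [True, False, True]})

mask = np.asarray(dataset["flag"], dtype=bool)
filtered = dataset.select(np.flatnonzero(mask))
print(len(filtered))  # 2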
https://api.github.com/repos/huggingface/datasets/issues/1795
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1795/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1795/comments
https://api.github.com/repos/huggingface/datasets/issues/1795/events
https://github.com/huggingface/datasets/pull/1795
797,021,730
MDExOlB1bGxSZXF1ZXN0NTY0MDk5OTUz
1,795
Custom formatting for lazy map + arrow data extraction refactor
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "This PR is amazing!!!\r\n\r\nI only looked at `arrow_dataset.py` and `formatting/formatting.py` but those look good to me.\r\n\r\nMy only (tiny) concern is the name of the function: I don't think it's self-evident that `set_format` applies a generic transformation, and some people might not look too far into the d...
1,611,938,153,000
1,649,655,822,000
1,612,518,846,000
MEMBER
null
Hi ! This PR refactors the way data are extracted from pyarrow tables to extend it to the use of custom formatting functions. While the internal storage of the dataset is always the Apache Arrow format, by setting a specific format on a dataset, you can cast the output of `datasets.Dataset.__getitem__` to NumPy/p...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1795/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1795/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1795", "html_url": "https://github.com/huggingface/datasets/pull/1795", "diff_url": "https://github.com/huggingface/datasets/pull/1795.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1795.patch", "merged_at": 1612518846000 }
true
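A short illustration of the user-facing effect of PR #1795: `set_format` controls the container type returned by `__getitem__` while the underlying storage stays Arrow. (The custom-transform variant landed as `set_transform` in later releases.)

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]], "label": [0, 1]})

# Ask for NumPy output; "torch", "tensorflow" and "pandas" work the same way.
ds.set_format(type="numpy", columns=["x", "label"])
print(type(ds[0]["x"]))  # <class 'numpy.ndarray'>
```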
https://api.github.com/repos/huggingface/datasets/issues/1794
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1794/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1794/comments
https://api.github.com/repos/huggingface/datasets/issues/1794/events
https://github.com/huggingface/datasets/pull/1794
796,975,588
MDExOlB1bGxSZXF1ZXN0NTY0MDYyMTkw
1,794
Move silicone directory
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,611,934,395,000
1,611,937,899,000
1,611,937,898,000
MEMBER
null
The dataset was added in #1761 but not in the right directory. I'm moving it to /datasets
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1794/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1794/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1794", "html_url": "https://github.com/huggingface/datasets/pull/1794", "diff_url": "https://github.com/huggingface/datasets/pull/1794.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1794.patch", "merged_at": 1611937898000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1793
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1793/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1793/comments
https://api.github.com/repos/huggingface/datasets/issues/1793/events
https://github.com/huggingface/datasets/pull/1793
796,940,299
MDExOlB1bGxSZXF1ZXN0NTY0MDMzMjk0
1,793
Minor fix the docstring of load_metric
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
[]
closed
false
null
[]
null
[]
1,611,931,655,000
1,611,939,212,000
1,611,939,212,000
MEMBER
null
Minor fix: - duplicated attributes - format fix
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1793/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1793/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1793", "html_url": "https://github.com/huggingface/datasets/pull/1793", "diff_url": "https://github.com/huggingface/datasets/pull/1793.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1793.patch", "merged_at": 1611939212000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1792
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1792/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1792/comments
https://api.github.com/repos/huggingface/datasets/issues/1792/events
https://github.com/huggingface/datasets/pull/1792
796,934,627
MDExOlB1bGxSZXF1ZXN0NTY0MDI4NTk1
1,792
Allow loading dataset in-memory
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
[]
closed
false
null
[]
null
[ "I am wondering how to test their difference...", "> ring how to test their difference...\r\n\r\nHmm I don't think pyarrow exposes an API to check if a Table comes from a file that is memory-mapped. In particular since all the buffer/memory logic is in the C++ part of pyarrow.\r\n\r\nOtherwise we can still check ...
1,611,931,190,000
1,613,139,208,000
1,613,139,208,000
MEMBER
null
Allow loading datasets either: - from a memory-mapped file (current implementation) - from a file descriptor, copying data to physical memory Close #708
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1792/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1792/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1792", "html_url": "https://github.com/huggingface/datasets/pull/1792", "diff_url": "https://github.com/huggingface/datasets/pull/1792.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1792.patch", "merged_at": 1613139208000 }
true
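The user-facing switch introduced by PR #1792, as a sketch (`keep_in_memory` is the parameter name the library uses for this option):

```python
from datasets import load_dataset

# Default: the dataset is memory-mapped from the Arrow file on disk.
squad = load_dataset("squad", split="train")

# Copy the data into physical memory instead, trading RAM for faster access.
squad_in_memory = load_dataset("squad", split="train", keep_in_memory=True)
```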
https://api.github.com/repos/huggingface/datasets/issues/1791
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1791/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1791/comments
https://api.github.com/repos/huggingface/datasets/issues/1791/events
https://github.com/huggingface/datasets/pull/1791
796,924,519
MDExOlB1bGxSZXF1ZXN0NTY0MDE5OTk3
1,791
Small fix with corrected logging of train vectors
{ "login": "TezRomacH", "id": 7549587, "node_id": "MDQ6VXNlcjc1NDk1ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/7549587?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TezRomacH", "html_url": "https://github.com/TezRomacH", "followers_url": "https://api.github.com/users/Te...
[]
closed
false
null
[]
null
[]
1,611,930,366,000
1,611,946,270,000
1,611,939,907,000
CONTRIBUTOR
null
Now you can set `train_size` to the whole dataset size via `train_size = -1`, and the log writes not `Training the index with the first -1 vectors` but (for example) `Training the index with the first 16123 vectors`. The same holds when `train_size` exceeds the dataset length: the logging will be correct.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1791/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1791/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1791", "html_url": "https://github.com/huggingface/datasets/pull/1791", "diff_url": "https://github.com/huggingface/datasets/pull/1791.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1791.patch", "merged_at": 1611939907000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1790
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1790/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1790/comments
https://api.github.com/repos/huggingface/datasets/issues/1790/events
https://github.com/huggingface/datasets/issues/1790
796,678,157
MDU6SXNzdWU3OTY2NzgxNTc=
1,790
ModuleNotFoundError: No module named 'apache_beam', when specific languages.
{ "login": "miyamonz", "id": 6331508, "node_id": "MDQ6VXNlcjYzMzE1MDg=", "avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4", "gravatar_id": "", "url": "https://api.github.com/users/miyamonz", "html_url": "https://github.com/miyamonz", "followers_url": "https://api.github.com/users/miyam...
[]
open
false
null
[]
null
[ "Hi !\r\n\r\nApache Beam is a framework used to define data transformation pipelines. These pipeline can then be run in many runtimes: DataFlow, Spark, Flink, etc. There also exist a local runner called the DirectRunner.\r\nWikipedia is a dataset that requires some parsing, so to allow the processing to be run on t...
1,611,908,244,000
1,616,674,251,000
null
CONTRIBUTOR
null
```py import datasets wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets') ``` then `ModuleNotFoundError: No module named 'apache_beam'` happened. The error doesn't appear when it's '20200501.en'. I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1790/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1790/timeline
null
null
null
null
false
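As the reply on #1790 explains, the non-English Wikipedia configs are not preprocessed and must be built locally with Apache Beam. A sketch of what that looks like, after `pip install apache-beam mwparserfromhell`; note the local build is resource-heavy:

```python
from datasets import load_dataset

# The DirectRunner executes the Beam pipeline on the local machine.
wiki = load_dataset(
    "wikipedia",
    "20200501.ja",
    beam_runner="DirectRunner",
    cache_dir="./datasets",
)
```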
https://api.github.com/repos/huggingface/datasets/issues/1789
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1789/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1789/comments
https://api.github.com/repos/huggingface/datasets/issues/1789/events
https://github.com/huggingface/datasets/pull/1789
796,229,721
MDExOlB1bGxSZXF1ZXN0NTYzNDQyMTc2
1,789
[BUG FIX] typo in the import path for metrics
{ "login": "yjernite", "id": 10469459, "node_id": "MDQ6VXNlcjEwNDY5NDU5", "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yjernite", "html_url": "https://github.com/yjernite", "followers_url": "https://api.github.com/users/yje...
[]
closed
false
null
[]
null
[]
1,611,856,897,000
1,611,857,636,000
1,611,857,636,000
MEMBER
null
This tiny PR fixes a typo introduced in https://github.com/huggingface/datasets/pull/1726 which prevents loading new metrics
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1789/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1789/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1789", "html_url": "https://github.com/huggingface/datasets/pull/1789", "diff_url": "https://github.com/huggingface/datasets/pull/1789.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1789.patch", "merged_at": 1611857635000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1788
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1788/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1788/comments
https://api.github.com/repos/huggingface/datasets/issues/1788/events
https://github.com/huggingface/datasets/pull/1788
795,544,422
MDExOlB1bGxSZXF1ZXN0NTYyODc1NzA2
1,788
Doc2dial rc
{ "login": "songfeng", "id": 2062185, "node_id": "MDQ6VXNlcjIwNjIxODU=", "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "gravatar_id": "", "url": "https://api.github.com/users/songfeng", "html_url": "https://github.com/songfeng", "followers_url": "https://api.github.com/users/songf...
[]
closed
false
null
[]
null
[]
1,611,791,460,000
1,611,859,573,000
1,611,859,573,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1788/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1788/timeline
null
null
true
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1788", "html_url": "https://github.com/huggingface/datasets/pull/1788", "diff_url": "https://github.com/huggingface/datasets/pull/1788.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1788.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1787
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1787/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1787/comments
https://api.github.com/repos/huggingface/datasets/issues/1787/events
https://github.com/huggingface/datasets/pull/1787
795,485,842
MDExOlB1bGxSZXF1ZXN0NTYyODI1NTI3
1,787
Update the CommonGen citation information
{ "login": "yuchenlin", "id": 10104354, "node_id": "MDQ6VXNlcjEwMTA0MzU0", "avatar_url": "https://avatars.githubusercontent.com/u/10104354?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yuchenlin", "html_url": "https://github.com/yuchenlin", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[]
1,611,785,567,000
1,611,842,189,000
1,611,842,189,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1787/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1787/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1787", "html_url": "https://github.com/huggingface/datasets/pull/1787", "diff_url": "https://github.com/huggingface/datasets/pull/1787.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1787.patch", "merged_at": 1611842189000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1786
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1786/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1786/comments
https://api.github.com/repos/huggingface/datasets/issues/1786/events
https://github.com/huggingface/datasets/issues/1786
795,462,816
MDU6SXNzdWU3OTU0NjI4MTY=
1,786
How to use split dataset
{ "login": "kkhan188", "id": 78090287, "node_id": "MDQ6VXNlcjc4MDkwMjg3", "avatar_url": "https://avatars.githubusercontent.com/u/78090287?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kkhan188", "html_url": "https://github.com/kkhan188", "followers_url": "https://api.github.com/users/kkh...
[ { "id": 1935892912, "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question", "name": "question", "color": "d876e3", "default": true, "description": "Further information is requested" } ]
closed
false
null
[]
null
[ "By default, all 3 splits will be loaded if you run the following:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"lambada\")\r\nprint(dataset[\"train\"])\r\nprint(dataset[\"valid\"])\r\n\r\n```\r\n\r\nIf you wanted to do load this manually, you could do this:\r\n\r\n```python\r\nf...
1,611,783,467,000
1,619,191,059,000
1,619,191,059,000
NONE
null
![Capture1](https://user-images.githubusercontent.com/78090287/106057436-cb6a1f00-6111-11eb-8c9c-3658065b1fdf.PNG) Hey, I want to split the lambada dataset into corpus, test, train and valid txt files (like Penn Treebank), but I am not able to achieve this. What I am doing is executing the lambada.py file in my pro...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1786/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1786/timeline
null
completed
null
null
false
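Building on the answer quoted in #1786, a sketch of dumping each LAMBADA split to a plain-text file in the Penn Treebank style the asker wanted (assuming the config exposes a "text" column, as LAMBADA does):

```python
from datasets import load_dataset

lambada = load_dataset("lambada")

# Write one example per line for each split, flattening internal newlines.
for split, ds in lambada.items():
    with open(f"lambada_{split}.txt", "w") as f:
        for text in ds["text"]:
            f.write(text.replace("\n", " ") + "\n")
```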
https://api.github.com/repos/huggingface/datasets/issues/1785
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1785/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1785/comments
https://api.github.com/repos/huggingface/datasets/issues/1785/events
https://github.com/huggingface/datasets/issues/1785
795,458,856
MDU6SXNzdWU3OTU0NTg4NTY=
1,785
Not enough disk space (Needed: Unknown size) when caching on a cluster
{ "login": "olinguyen", "id": 4341867, "node_id": "MDQ6VXNlcjQzNDE4Njc=", "avatar_url": "https://avatars.githubusercontent.com/u/4341867?v=4", "gravatar_id": "", "url": "https://api.github.com/users/olinguyen", "html_url": "https://github.com/olinguyen", "followers_url": "https://api.github.com/users/ol...
[]
closed
false
null
[]
null
[ "Hi ! \r\n\r\nWhat do you mean by \"disk_usage(\".\").free` can't compute on the cluster's shared disk\" exactly ?\r\nDoes it return 0 ?", "Yes, that's right. It shows 0 free space even though there is. I suspect it might have to do with permissions on the shared disk.\r\n\r\n```python\r\n>>> disk_usage(\".\")\r\...
1,611,783,059,000
1,654,438,570,000
1,611,968,876,000
CONTRIBUTOR
null
I'm running some experiments where I'm caching datasets on a cluster and accessing them through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk. The exact error thrown: ```bash >>> load_dataset(dataset, cache_dir="/path/to/cluster/shared/path") OSError: Not eno...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1785/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1785/timeline
null
completed
null
null
false
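The failing check behind #1785 boils down to `shutil.disk_usage` reporting zero free bytes on the shared mount. A one-liner to verify what the library sees (the path is the placeholder from the issue):

```python
import shutil

# On the affected cluster this returned free=0 even though space existed.
print(shutil.disk_usage("/path/to/cluster/shared/path"))
```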
https://api.github.com/repos/huggingface/datasets/issues/1784
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1784/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1784/comments
https://api.github.com/repos/huggingface/datasets/issues/1784/events
https://github.com/huggingface/datasets/issues/1784
794,659,174
MDU6SXNzdWU3OTQ2NTkxNzQ=
1,784
JSONDecodeError on JSON with multiple lines
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Hi !\r\n\r\nThe `json` dataset script does support this format. For example loading a dataset with this format works on my side:\r\n```json\r\n{\"key1\":11, \"key2\":12, \"key3\":13}\r\n{\"key1\":21, \"key2\":22, \"key3\":23}\r\n```\r\n\r\nCan you show the full stacktrace please ? Also which version of datasets an...
1,611,706,762,000
1,612,082,838,000
1,612,082,838,000
CONTRIBUTOR
null
Hello :), I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported: ```json {"key1":11, "key2":12, "key3":13} {"key1":21, "key2":22, "key3":23} ``` But, when I try loading a dataset with th...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1784/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1784/timeline
null
completed
null
null
false
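For reference on #1784, the documented way to load the JSON Lines format shown in the issue (the filename is a placeholder):

```python
from datasets import load_dataset

# One JSON object per line, e.g.:
# {"key1":11, "key2":12, "key3":13}
# {"key1":21, "key2":22, "key3":23}
dataset = load_dataset("json", data_files="train.jsonl")
```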
https://api.github.com/repos/huggingface/datasets/issues/1783
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1783/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1783/comments
https://api.github.com/repos/huggingface/datasets/issues/1783/events
https://github.com/huggingface/datasets/issues/1783
794,544,495
MDU6SXNzdWU3OTQ1NDQ0OTU=
1,783
Dataset Examples Explorer
{ "login": "ChewKokWah", "id": 30875246, "node_id": "MDQ6VXNlcjMwODc1MjQ2", "avatar_url": "https://avatars.githubusercontent.com/u/30875246?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ChewKokWah", "html_url": "https://github.com/ChewKokWah", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Hi @ChewKokWah,\r\n\r\nWe're working on it! In the meantime, you can still find the dataset explorer at the following URL: https://huggingface.co/datasets/viewer/", "Glad to see that it still exist, this existing one is more than good enough for me, it is feature rich, simple to use and concise. \r\nHope similar...
1,611,693,542,000
1,612,187,924,000
1,612,187,924,000
NONE
null
In the older version of Datasets, there was a useful Dataset Explorer that allowed users to visualize the examples (training, test and validation) of a particular dataset; it is no longer there in the current version. Hope HuggingFace can re-enable the feature to at least allow viewing of the first 20 examples of a ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1783/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1783/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1782
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1782/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1782/comments
https://api.github.com/repos/huggingface/datasets/issues/1782/events
https://github.com/huggingface/datasets/pull/1782
794,167,920
MDExOlB1bGxSZXF1ZXN0NTYxNzI5OTc3
1,782
Update pyarrow import warning
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,611,661,631,000
1,611,669,050,000
1,611,669,049,000
MEMBER
null
Update the minimum version to >=0.17.1 in the pyarrow version check and update the message. I also moved the check to the top of the `__init__.py`.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1782/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1782/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1782", "html_url": "https://github.com/huggingface/datasets/pull/1782", "diff_url": "https://github.com/huggingface/datasets/pull/1782.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1782.patch", "merged_at": 1611669049000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1781
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1781/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1781/comments
https://api.github.com/repos/huggingface/datasets/issues/1781/events
https://github.com/huggingface/datasets/issues/1781
793,914,556
MDU6SXNzdWU3OTM5MTQ1NTY=
1,781
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import
{ "login": "PalaashAgrawal", "id": 45964869, "node_id": "MDQ6VXNlcjQ1OTY0ODY5", "avatar_url": "https://avatars.githubusercontent.com/u/45964869?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PalaashAgrawal", "html_url": "https://github.com/PalaashAgrawal", "followers_url": "https://api.gi...
[]
open
false
null
[]
null
[ "Hi ! I'm not able to reproduce the issue. Can you try restarting your runtime ?\r\n\r\nThe PyExtensionType is available in pyarrow starting 0.17.1 iirc. If restarting your runtime doesn't fix this, can you try updating pyarrow ?\r\n```\r\npip install pyarrow --upgrade\r\n```", "We should bump up the version test...
1,611,634,715,000
1,611,661,656,000
null
NONE
null
I'm using Colab, and suddenly this morning there is this error. Have a look below! ![screenshot-colab research google com-2021 01 26-08-15-36](https://user-images.githubusercontent.com/45964869/105799890-fdaf3b80-5fae-11eb-8f06-11b65cdccc30.png)
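A quick diagnostic one might run in a Colab cell, sketched from the fix suggested in the comments (`PyExtensionType` appeared in pyarrow around 0.17.1):

```python
# Check whether the installed pyarrow is new enough to expose PyExtensionType.
import pyarrow

print(pyarrow.__version__)
print(hasattr(pyarrow, "PyExtensionType"))
# If this prints False, upgrade and then restart the runtime so the new wheel is loaded:
#   pip install pyarrow --upgrade
```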
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1781/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1781/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1780
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1780/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1780/comments
https://api.github.com/repos/huggingface/datasets/issues/1780/events
https://github.com/huggingface/datasets/pull/1780
793,882,132
MDExOlB1bGxSZXF1ZXN0NTYxNDkxNTgy
1,780
Update SciFact URL
{ "login": "dwadden", "id": 3091916, "node_id": "MDQ6VXNlcjMwOTE5MTY=", "avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwadden", "html_url": "https://github.com/dwadden", "followers_url": "https://api.github.com/users/dwadden/...
[]
closed
false
null
[]
null
[ "Hi ! The error you get is the result of some verifications the library is doing when loading a dataset that already has some metadata in the dataset_infos.json. You can ignore the verifications with \r\n```\r\npython datasets-cli test datasets/scifact --save_infos --all_configs --ignore_verifications\r\n```\r\nThi...
1,611,629,346,000
1,611,859,680,000
1,611,829,185,000
CONTRIBUTOR
null
Hi, I'm following up on this [issue](https://github.com/huggingface/datasets/issues/1717). I'm the SciFact dataset creator, and I'm trying to update the SciFact data URL in your repo. Thanks again for adding the dataset! Basically, I'd just like to change the `_URL` to `"https://scifact.s3-us-west-2.amazonaws.com/re...
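For illustration, the change amounts to a one-line edit in the dataset loading script. The sketch below uses a placeholder for the new URL, since the full value is truncated above, and the class body is assumed:

```python
import datasets

# Placeholder -- the real URL is truncated in the issue body and not reproduced here.
_URL = "https://scifact.s3-us-west-2.amazonaws.com/<new-release-path>"


class Scifact(datasets.GeneratorBasedBuilder):  # class name assumed
    def _split_generators(self, dl_manager):
        archive_path = dl_manager.download_and_extract(_URL)
        ...
```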
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1780/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1780/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1780", "html_url": "https://github.com/huggingface/datasets/pull/1780", "diff_url": "https://github.com/huggingface/datasets/pull/1780.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1780.patch", "merged_at": 1611829185000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1779
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1779/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1779/comments
https://api.github.com/repos/huggingface/datasets/issues/1779/events
https://github.com/huggingface/datasets/pull/1779
793,539,703
MDExOlB1bGxSZXF1ZXN0NTYxMjEwNjI5
1,779
Ignore definition line number of functions for caching
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,611,592,949,000
1,611,656,420,000
1,611,656,419,000
MEMBER
null
As noticed in #1718, when a function used for processing with `map` is moved within its Python file, the change of line number causes the caching mechanism to consider it a different function, and therefore it recomputes everything. This is because we were not ignoring the line number definition f...
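To see why this matters, here is a hedged sketch (not the library's actual implementation) of a fingerprint that ignores where a function is defined, by hashing its instruction stream instead of the raw code object:

```python
import dis
import hashlib


def fingerprint_ignoring_lineno(func):
    """Hash the disassembled instructions; co_firstlineno never enters the digest,
    so moving the function within its file leaves the fingerprint unchanged."""
    ops = ";".join(f"{ins.opname}:{ins.argrepr}" for ins in dis.get_instructions(func))
    return hashlib.sha256(ops.encode("utf-8")).hexdigest()
```

With a scheme like this, relocating a `map` callable in its source file no longer invalidates the cache, while any change to the function's actual logic still does.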
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1779/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1779/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1779", "html_url": "https://github.com/huggingface/datasets/pull/1779", "diff_url": "https://github.com/huggingface/datasets/pull/1779.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1779.patch", "merged_at": 1611656419000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1778
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1778/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1778/comments
https://api.github.com/repos/huggingface/datasets/issues/1778/events
https://github.com/huggingface/datasets/pull/1778
793,474,507
MDExOlB1bGxSZXF1ZXN0NTYxMTU2Mzk1
1,778
Narrative QA Manual
{ "login": "rsanjaykamath", "id": 18527321, "node_id": "MDQ6VXNlcjE4NTI3MzIx", "avatar_url": "https://avatars.githubusercontent.com/u/18527321?v=4", "gravatar_id": "", "url": "https://api.github.com/users/rsanjaykamath", "html_url": "https://github.com/rsanjaykamath", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
[ "@lhoestq sorry I opened a new pull request because of some issues with the previous code base. This pull request is originally from #1364", "Excellent comments. Thanks for those valuable suggestions. I changed everything as you have pointed out :) ", "I've copied the same template as NarrativeQA now. Please le...
1,611,588,151,000
1,611,912,914,000
1,611,912,891,000
CONTRIBUTOR
null
Submitting the manual version of the Narrative QA script, which requires a manual download from the original repository.
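A minimal sketch of the manual-download pattern in `datasets` scripts; the class and instruction text below are illustrative, not taken from the PR:

```python
import os

import datasets


class NarrativeQAManual(datasets.GeneratorBasedBuilder):  # name assumed
    @property
    def manual_download_instructions(self):
        return "Clone the original NarrativeQA repository and pass its local path via data_dir."

    def _split_generators(self, dl_manager):
        path = dl_manager.manual_dir  # directory the user downloaded by hand
        if path is None or not os.path.exists(path):
            raise FileNotFoundError(f"{path} not found; follow the manual download instructions.")
        ...
```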
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1778/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1778/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1778", "html_url": "https://github.com/huggingface/datasets/pull/1778", "diff_url": "https://github.com/huggingface/datasets/pull/1778.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1778.patch", "merged_at": 1611912891000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1777
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1777/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1777/comments
https://api.github.com/repos/huggingface/datasets/issues/1777/events
https://github.com/huggingface/datasets/issues/1777
793,273,770
MDU6SXNzdWU3OTMyNzM3NzA=
1,777
GPT2 MNLI training using run_glue.py
{ "login": "nlp-student", "id": 76427077, "node_id": "MDQ6VXNlcjc2NDI3MDc3", "avatar_url": "https://avatars.githubusercontent.com/u/76427077?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nlp-student", "html_url": "https://github.com/nlp-student", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
[]
1,611,572,032,000
1,611,573,173,000
1,611,573,173,000
NONE
null
Edit: I'm closing this because I actually meant to post this in `transformers`, not `datasets`. Running this on Google Colab, ``` !python run_glue.py \ --model_name_or_path gpt2 \ --task_name mnli \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_gpu_train_batch_size 10 \ --gradient_accu...
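Worth noting for anyone landing here: the issue body is truncated, so the actual error is unknown, but a common stumbling block when fine-tuning GPT-2 for classification is its missing padding token. A frequently used workaround looks like this (a sketch, not the resolution of this issue):

```python
from transformers import GPT2ForSequenceClassification, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=3)  # MNLI: 3 classes

# GPT-2 has no pad token by default; reuse EOS so batched padding works.
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.eos_token_id
```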
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1777/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1777/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1776
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1776/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1776/comments
https://api.github.com/repos/huggingface/datasets/issues/1776/events
https://github.com/huggingface/datasets/issues/1776
792,755,249
MDU6SXNzdWU3OTI3NTUyNDk=
1,776
[Question & Bug Report] Can we preprocess a dataset on the fly?
{ "login": "shuaihuaiyi", "id": 14048129, "node_id": "MDQ6VXNlcjE0MDQ4MTI5", "avatar_url": "https://avatars.githubusercontent.com/u/14048129?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shuaihuaiyi", "html_url": "https://github.com/shuaihuaiyi", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
[ "We are very actively working on this. How does your dataset look like in practice (number/size/type of files)?", "It's a text file with many lines (about 1B) of Chinese sentences. I use it to train language model using https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm...
1,611,480,504,000
1,621,484,158,000
1,621,484,158,000
NONE
null
I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus, which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating a cache? BTW, I tried raising `writer_batch_si...
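One way to get on-the-fly preprocessing in recent versions of `datasets` is `set_transform`, which applies a function at access time instead of writing a mapped cache to disk. A sketch under that assumption, with placeholder file and model names:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")  # placeholder model


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)


dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]
dataset.set_transform(tokenize)  # runs lazily on access; no cache file is written
print(dataset[0])  # tokenized on the fly
```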
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1776/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1776/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1775
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1775/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1775/comments
https://api.github.com/repos/huggingface/datasets/issues/1775/events
https://github.com/huggingface/datasets/issues/1775
792,742,120
MDU6SXNzdWU3OTI3NDIxMjA=
1,775
Efficient ways to iterate the dataset
{ "login": "zhongpeixiang", "id": 11826803, "node_id": "MDQ6VXNlcjExODI2ODAz", "avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zhongpeixiang", "html_url": "https://github.com/zhongpeixiang", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
[ "It seems that selecting a subset of colums directly from the dataset, i.e., dataset[\"column\"], is slow.", "I was wrong, ```dataset[\"column\"]``` is fast." ]
1,611,474,871,000
1,611,481,839,000
1,611,481,839,000
CONTRIBUTOR
null
For a large dataset that does not fit in memory, how can I select only a subset of features from each example? If I iterate over the dataset and then select the subset of features one by one, the resulting memory usage will be huge. Are there any ways to solve this? Thanks
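For reference, two memory-friendly options relevant to this question (a sketch; the dataset name is a placeholder). Arrow-backed datasets are memory-mapped, so neither pulls the whole table into RAM:

```python
from datasets import load_dataset

dataset = load_dataset("glue", "mnli", split="train")  # placeholder dataset

# 1) Restrict which columns are returned when indexing or iterating:
dataset.set_format(columns=["premise", "label"])

# 2) Or read one column directly (fast, as the comments above conclude):
labels = dataset["label"]
```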
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1775/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1775/timeline
null
completed
null
null
false