| comments_url stringlengths 70 70 | timeline_url stringlengths 70 70 | closed_at stringlengths 20 20 ⌀ | performed_via_github_app null | state_reason stringclasses 3 values | node_id stringlengths 18 32 | state stringclasses 2 values | assignees listlengths 0 4 | draft bool 2 classes | number int64 1.61k 6.73k | user dict | title stringlengths 1 290 | events_url stringlengths 68 68 | milestone dict | labels_url stringlengths 75 75 | created_at stringlengths 20 20 | active_lock_reason null | locked bool 1 class | assignee dict | pull_request dict | id int64 771M 2.18B | labels listlengths 0 4 | url stringlengths 61 61 | comments listlengths 0 30 | repository_url stringclasses 1 value | author_association stringclasses 3 values | body stringlengths 0 228k ⌀ | updated_at stringlengths 20 20 | html_url stringlengths 49 51 | reactions dict | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1912/comments | https://api.github.com/repos/huggingface/datasets/issues/1912/timeline | 2021-02-24T13:44:53Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc2NDY2ODQx | closed | [] | false | 1,912 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Update: WMT - use mirror links | https://api.github.com/repos/huggingface/datasets/issues/1912/events | null | https://api.github.com/repos/huggingface/datasets/issues/1912/labels{/name} | 2021-02-19T13:42:34Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1912.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1912",
"merged_at": "2021-02-24T13:44:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1912.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 812,034,140 | [] | https://api.github.com/repos/huggingface/datasets/issues/1912 | [
"So much better - thank you for doing that, @lhoestq!",
"Also fixed the `uncorpus` urls for wmt19 ru-en and zh-en for https://github.com/huggingface/datasets/issues/1893",
"Thanks!\r\nCan this be merged sooner? \r\nI manually update it and it works well."
] | https://api.github.com/repos/huggingface/datasets | MEMBER | As asked in #1892 I created mirrors of the data hosted on statmt.org and updated the wmt scripts.
Now downloading the wmt datasets is blazing fast :)
cc @stas00 @patrickvonplaten | 2021-02-24T13:44:53Z | https://github.com/huggingface/datasets/pull/1912 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 4,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1912/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1911/comments | https://api.github.com/repos/huggingface/datasets/issues/1911/timeline | null | null | null | MDU6SXNzdWU4MTIwMDk5NTY= | open | [] | null | 1,911 | {
"avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4",
"events_url": "https://api.github.com/users/ayubSubhaniya/events{/privacy}",
"followers_url": "https://api.github.com/users/ayubSubhaniya/followers",
"following_url": "https://api.github.com/users/ayubSubhaniya/following{/other_user}",
"g... | Saving processed dataset running infinitely | https://api.github.com/repos/huggingface/datasets/issues/1911/events | null | https://api.github.com/repos/huggingface/datasets/issues/1911/labels{/name} | 2021-02-19T13:09:19Z | null | false | null | null | 812,009,956 | [] | https://api.github.com/repos/huggingface/datasets/issues/1911 | [
"@thomwolf @lhoestq can you guys please take a look and recommend some solution.",
"am suspicious of this thing? what's the purpose of this? pickling and unpickling\r\n`self = pickle.loads(pickle.dumps(self))`\r\n\r\n```\r\n    def save_to_disk(self, dataset_path: str, fs=None):\r\n        \"\"\"\r\n        Save... | https://api.github.com/repos/huggingface/datasets | NONE | I have a text dataset of size 220M.
For pre-processing, I need to tokenize this and filter rows with the large sequence.
My tokenization took roughly 3hrs. I used map() with batch size 1024 and multi-process with 96 processes.
filter() function was way too slow, so I used a hack to use pyarrow filter table func... | 2021-02-23T07:34:44Z | https://github.com/huggingface/datasets/issues/1911 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1911/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1910/comments | https://api.github.com/repos/huggingface/datasets/issues/1910/timeline | 2021-03-04T22:02:47Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc2MTg0MDQ3 | closed | [] | false | 1,910 | {
"avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4",
"events_url": "https://api.github.com/users/ZihanWangKi/events{/privacy}",
"followers_url": "https://api.github.com/users/ZihanWangKi/followers",
"following_url": "https://api.github.com/users/ZihanWangKi/following{/other_user}",
"gists_u... | Adding CoNLLpp dataset. | https://api.github.com/repos/huggingface/datasets/issues/1910/events | null | https://api.github.com/repos/huggingface/datasets/issues/1910/labels{/name} | 2021-02-19T05:12:30Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1910.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1910",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1910.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1910"
} | 811,697,108 | [] | https://api.github.com/repos/huggingface/datasets/issues/1910 | [
"It looks like this PR now includes changes to many other files than the ones for CoNLLpp.\r\n\r\nTo fix that feel free to create another branch and another PR.\r\n\r\nThis was probably caused by a git rebase. You can avoid this issue by using git merge if you've already pushed your branch."
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | 2021-03-04T22:02:47Z | https://github.com/huggingface/datasets/pull/1910 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1910/reactions"
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/1907/comments | https://api.github.com/repos/huggingface/datasets/issues/1907/timeline | 2021-02-22T23:22:04Z | null | completed | MDU6SXNzdWU4MTE1MjA1Njk= | closed | [] | null | 1,907 | {
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"events_url": "https://api.github.com/users/francisco-perez-sorrosal/events{/privacy}",
"followers_url": "https://api.github.com/users/francisco-perez-sorrosal/followers",
"following_url": "https://api.github.com/users/francisco-perez-sorrosa... | DBPedia14 Dataset Checksum bug? | https://api.github.com/repos/huggingface/datasets/issues/1907/events | null | https://api.github.com/repos/huggingface/datasets/issues/1907/labels{/name} | 2021-02-18T22:25:48Z | null | false | null | null | 811,520,569 | [] | https://api.github.com/repos/huggingface/datasets/issues/1907 | [
"Hi ! :)\r\n\r\nThis looks like the same issue as https://github.com/huggingface/datasets/issues/1856 \r\nBasically google drive has quota issues that makes it inconvenient for downloading files.\r\n\r\nIf the quota of a file is exceeded, you have to wait 24h for the quota to reset (which is painful).\r\n\r\nThe er... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Hi there!!!
I've been using successfully the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase in the last couple of weeks, but in the last couple of days now I get this error:
```
Traceback (most recent call last):
File "./conditional_classification/basic_pipeline.py", line 178, i... | 2021-02-22T23:22:05Z | https://github.com/huggingface/datasets/issues/1907 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1907/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1906/comments | https://api.github.com/repos/huggingface/datasets/issues/1906/timeline | null | null | null | MDU6SXNzdWU4MTE0MDUyNzQ= | open | [] | null | 1,906 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url":... | Feature Request: Support for Pandas `Categorical` | https://api.github.com/repos/huggingface/datasets/issues/1906/events | null | https://api.github.com/repos/huggingface/datasets/issues/1906/labels{/name} | 2021-02-18T19:46:05Z | null | false | null | null | 811,405,274 | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": fals... | https://api.github.com/repos/huggingface/datasets/issues/1906 | [
"We already have a ClassLabel type that does this kind of mapping between the label ids (integers) and actual label values (strings).\r\n\r\nI wonder if actually we should use the DictionaryType from Arrow and the Categorical type from pandas for the `datasets` ClassLabel feature type.\r\nCurrently ClassLabel corre... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_... | 2021-02-23T14:38:50Z | https://github.com/huggingface/datasets/issues/1906 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1906/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1905/comments | https://api.github.com/repos/huggingface/datasets/issues/1905/timeline | 2021-02-20T22:01:30Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc1OTIxMDk1 | closed | [] | true | 1,905 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url":... | Standardizing datasets.dtypes | https://api.github.com/repos/huggingface/datasets/issues/1905/events | null | https://api.github.com/repos/huggingface/datasets/issues/1905/labels{/name} | 2021-02-18T19:15:31Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1905.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1905",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1905.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1905"
} | 811,384,174 | [] | https://api.github.com/repos/huggingface/datasets/issues/1905 | [
"Also - I took a stab at updating the docs, but I'm not sure how to actually check the outputs to see if it's formatted properly."
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | This PR was further branched off of jdy-str-to-pyarrow-parsing, so it depends on https://github.com/huggingface/datasets/pull/1900 going first for the diff to be up-to-date (I'm not sure if there's a way for me to use jdy-str-to-pyarrow-parsing as a base branch while having it appear in the pull requests here).
This... | 2021-02-20T22:01:30Z | https://github.com/huggingface/datasets/pull/1905 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1905/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1904/comments | https://api.github.com/repos/huggingface/datasets/issues/1904/timeline | 2021-02-18T17:10:01Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc1ODE4MjA0 | closed | [] | false | 1,904 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Fix to_pandas for boolean ArrayXD | https://api.github.com/repos/huggingface/datasets/issues/1904/events | null | https://api.github.com/repos/huggingface/datasets/issues/1904/labels{/name} | 2021-02-18T16:30:46Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1904.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1904",
"merged_at": "2021-02-18T17:10:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1904.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 811,260,904 | [] | https://api.github.com/repos/huggingface/datasets/issues/1904 | [
"Thanks!"
] | https://api.github.com/repos/huggingface/datasets | MEMBER | As noticed in #1887 the conversion of a dataset with boolean ArrayXD feature types fails because the underlying ListArray conversion to numpy requires `zero_copy_only=False`.
zero copy is available for all primitive types except booleans
see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pya... | 2021-02-18T17:10:03Z | https://github.com/huggingface/datasets/pull/1904 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1904/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1903/comments | https://api.github.com/repos/huggingface/datasets/issues/1903/timeline | 2021-03-01T09:39:12Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc1NzIwOTk2 | closed | [] | false | 1,903 | {
"avatar_url": "https://avatars.githubusercontent.com/u/16264631?v=4",
"events_url": "https://api.github.com/users/vrindaprabhu/events{/privacy}",
"followers_url": "https://api.github.com/users/vrindaprabhu/followers",
"following_url": "https://api.github.com/users/vrindaprabhu/following{/other_user}",
"gist... | Initial commit for the addition of TIMIT dataset | https://api.github.com/repos/huggingface/datasets/issues/1903/events | null | https://api.github.com/repos/huggingface/datasets/issues/1903/labels{/name} | 2021-02-18T14:23:12Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1903.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1903",
"merged_at": "2021-03-01T09:39:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1903.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 811,145,531 | [] | https://api.github.com/repos/huggingface/datasets/issues/1903 | [
"@patrickvonplaten could you please review and help me close this PR?",
"@lhoestq Thank you so much for your comments and for patiently reviewing the code. Have _hopefully_ included all the suggested changes. Let me know if any more changes are required.\r\n\r\nSorry the code had lots of silly errors from my sid... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | The below points need to be addressed:
- Creation of dummy dataset is failing
- Need to check on the data representation
- License is not creative commons. Copyright: Portions © 1993 Trustees of the University of Pennsylvania
Also the links (_except the download_) point to the ami corpus! ;-)
@patrickvonplaten ... | 2021-03-01T09:39:12Z | https://github.com/huggingface/datasets/pull/1903 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1903/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1902/comments | https://api.github.com/repos/huggingface/datasets/issues/1902/timeline | 2021-02-18T09:55:41Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc1NTQwMDM1 | closed | [] | false | 1,902 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Fix setimes_2 wmt urls | https://api.github.com/repos/huggingface/datasets/issues/1902/events | null | https://api.github.com/repos/huggingface/datasets/issues/1902/labels{/name} | 2021-02-18T09:42:26Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1902.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1902",
"merged_at": "2021-02-18T09:55:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1902.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 810,931,171 | [] | https://api.github.com/repos/huggingface/datasets/issues/1902 | [] | https://api.github.com/repos/huggingface/datasets | MEMBER | Continuation of #1901
Some other urls were missing https | 2021-02-18T09:55:41Z | https://github.com/huggingface/datasets/pull/1902 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1902/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1901/comments | https://api.github.com/repos/huggingface/datasets/issues/1901/timeline | 2021-02-18T09:39:21Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc1NDY5MDUy | closed | [] | false | 1,901 | {
"avatar_url": "https://avatars.githubusercontent.com/u/3883941?v=4",
"events_url": "https://api.github.com/users/YangWang92/events{/privacy}",
"followers_url": "https://api.github.com/users/YangWang92/followers",
"following_url": "https://api.github.com/users/YangWang92/following{/other_user}",
"gists_url":... | Fix OPUS dataset download errors | https://api.github.com/repos/huggingface/datasets/issues/1901/events | null | https://api.github.com/repos/huggingface/datasets/issues/1901/labels{/name} | 2021-02-18T07:39:41Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1901.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1901",
"merged_at": "2021-02-18T09:39:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1901.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 810,845,605 | [] | https://api.github.com/repos/huggingface/datasets/issues/1901 | [] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Replace http with https.
https://github.com/huggingface/datasets/issues/854
https://discuss.huggingface.co/t/cannot-download-wmt16/2081
| 2021-02-18T15:07:20Z | https://github.com/huggingface/datasets/pull/1901 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1901/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1900/comments | https://api.github.com/repos/huggingface/datasets/issues/1900/timeline | 2021-02-19T18:27:11Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc1MTkxNTc3 | closed | [] | false | 1,900 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url":... | Issue #1895: Bugfix for string_to_arrow timestamp[ns] support | https://api.github.com/repos/huggingface/datasets/issues/1900/events | null | https://api.github.com/repos/huggingface/datasets/issues/1900/labels{/name} | 2021-02-17T20:26:04Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1900.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1900",
"merged_at": "2021-02-19T18:27:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1900.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 810,512,488 | [] | https://api.github.com/repos/huggingface/datasets/issues/1900 | [
"OK! Thank you for the review - I will follow up with a separate PR for the comments here (https://github.com/huggingface/datasets/pull/1900#discussion_r578319725)!"
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Should resolve https://github.com/huggingface/datasets/issues/1895
The main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType.
While adding unit-testing, I noticed that support for the double/float t... | 2021-02-19T18:27:11Z | https://github.com/huggingface/datasets/pull/1900 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1900/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1899/comments | https://api.github.com/repos/huggingface/datasets/issues/1899/timeline | 2021-02-17T17:20:49Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc1MDIxMjc4 | closed | [] | false | 1,899 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Fix: ALT - fix duplicated examples in alt-parallel | https://api.github.com/repos/huggingface/datasets/issues/1899/events | null | https://api.github.com/repos/huggingface/datasets/issues/1899/labels{/name} | 2021-02-17T15:53:56Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1899.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1899",
"merged_at": "2021-02-17T17:20:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1899.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 810,308,332 | [] | https://api.github.com/repos/huggingface/datasets/issues/1899 | [] | https://api.github.com/repos/huggingface/datasets | MEMBER | As noticed in #1898 by @10-zin the examples of the `alt-paralel` configurations have all the same values for the `translation` field.
This was due to a bad copy of a python dict.
This PR fixes that. | 2021-02-17T17:20:49Z | https://github.com/huggingface/datasets/pull/1899 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1899/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1898/comments | https://api.github.com/repos/huggingface/datasets/issues/1898/timeline | 2021-02-19T06:18:46Z | null | completed | MDU6SXNzdWU4MTAxNTcyNTE= | closed | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | 1,898 | {
"avatar_url": "https://avatars.githubusercontent.com/u/33179372?v=4",
"events_url": "https://api.github.com/users/10-zin/events{/privacy}",
"followers_url": "https://api.github.com/users/10-zin/followers",
"following_url": "https://api.github.com/users/10-zin/following{/other_user}",
"gists_url": "https://a... | ALT dataset has repeating instances in all splits | https://api.github.com/repos/huggingface/datasets/issues/1898/events | null | https://api.github.com/repos/huggingface/datasets/issues/1898/labels{/name} | 2021-02-17T12:51:42Z | null | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | null | 810,157,251 | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | https://api.github.com/repos/huggingface/datasets/issues/1898 | [
"Thanks for reporting. This looks like a very bad issue. I'm looking into it",
"I just merged a fix, we'll do a patch release soon. Thanks again for reporting, and sorry for the inconvenience.\r\nIn the meantime you can load `ALT` using `datasets` from the master branch",
"Thanks!!! works perfectly in the blead... | https://api.github.com/repos/huggingface/datasets | NONE | The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/
Seemed like a great dataset for some experiments I wanted to carry out, especially since its medium-sized, and has all splits.
Would be great if this could be fixed :)
Added a snapshot of the contents from `exp... | 2021-02-19T06:18:46Z | https://github.com/huggingface/datasets/issues/1898 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1898/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1897/comments | https://api.github.com/repos/huggingface/datasets/issues/1897/timeline | 2021-02-17T13:15:15Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc0ODU3MTIy | closed | [] | false | 1,897 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Fix PandasArrayExtensionArray conversion to native type | https://api.github.com/repos/huggingface/datasets/issues/1897/events | null | https://api.github.com/repos/huggingface/datasets/issues/1897/labels{/name} | 2021-02-17T11:48:24Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1897.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1897",
"merged_at": "2021-02-17T13:15:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1897.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 810,113,263 | [] | https://api.github.com/repos/huggingface/datasets/issues/1897 | [] | https://api.github.com/repos/huggingface/datasets | MEMBER | To make the conversion to csv work in #1887 , we need PandasArrayExtensionArray used for multidimensional numpy arrays to be converted to pandas native types.
However, previously pandas.core.internals.ExtensionBlock.to_native_types would fail with a PandasExtensionArray because
1. the PandasExtensionArray.isna metho... | 2021-02-17T13:15:16Z | https://github.com/huggingface/datasets/pull/1897 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1897/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1895/comments | https://api.github.com/repos/huggingface/datasets/issues/1895/timeline | 2021-02-19T18:27:11Z | null | completed | MDU6SXNzdWU4MDk2MzAyNzE= | closed | [] | null | 1,895 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"events_url": "https://api.github.com/users/justin-yan/events{/privacy}",
"followers_url": "https://api.github.com/users/justin-yan/followers",
"following_url": "https://api.github.com/users/justin-yan/following{/other_user}",
"gists_url":... | Bug Report: timestamp[ns] not recognized | https://api.github.com/repos/huggingface/datasets/issues/1895/events | null | https://api.github.com/repos/huggingface/datasets/issues/1895/labels{/name} | 2021-02-16T20:38:04Z | null | false | null | null | 809,630,271 | [] | https://api.github.com/repos/huggingface/datasets/issues/1895 | [
"Thanks for reporting !\r\n\r\nYou're right, `string_to_arrow` should be able to take `\"timestamp[ns]\"` as input and return the right pyarrow timestamp type.\r\nFeel free to suggest a fix for `string_to_arrow` and open a PR if you want to contribute ! This would be very appreciated :)\r\n\r\nTo give you more cont... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type.
```
The fact... | 2021-02-19T18:27:11Z | https://github.com/huggingface/datasets/issues/1895 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1895/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1894/comments | https://api.github.com/repos/huggingface/datasets/issues/1894/timeline | null | null | null | MDU6SXNzdWU4MDk2MDk2NTQ= | open | [] | null | 1,894 | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "h... | benchmarking against MMapIndexedDataset | https://api.github.com/repos/huggingface/datasets/issues/1894/events | null | https://api.github.com/repos/huggingface/datasets/issues/1894/labels{/name} | 2021-02-16T20:04:58Z | null | false | null | null | 809,609,654 | [] | https://api.github.com/repos/huggingface/datasets/issues/1894 | [
"Hi sam !\r\nIndeed we can expect the performances to be very close since both MMapIndexedDataset and the `datasets` implem use memory mapping. With memory mapping what determines the I/O performance is the speed of your hard drive/SSD.\r\n\r\nIn terms of performance we're pretty close to the optimal speed for read... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | I am trying to benchmark my datasets based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implem uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB o... | 2021-02-17T18:52:28Z | https://github.com/huggingface/datasets/issues/1894 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1894/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1893/comments | https://api.github.com/repos/huggingface/datasets/issues/1893/timeline | 2021-03-03T17:42:02Z | null | completed | MDU6SXNzdWU4MDk1NTY1MDM= | closed | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | 1,893 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | wmt19 is broken | https://api.github.com/repos/huggingface/datasets/issues/1893/events | null | https://api.github.com/repos/huggingface/datasets/issues/1893/labels{/name} | 2021-02-16T18:39:58Z | null | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | null | 809,556,503 | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | https://api.github.com/repos/huggingface/datasets/issues/1893 | [
"This was also mentioned in https://github.com/huggingface/datasets/issues/488 \r\n\r\nThe bucket where is data was stored seems to be unavailable now. Maybe we can change the URL to the ones in https://conferences.unite.un.org/uncorpus/en/downloadoverview ?",
"Closing since this has been fixed by #1912"
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | 1. Check which lang pairs we have: `--dataset_name wmt19`:
Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']
2. OK, let's pick `ru-en`:
`--dataset_name wmt19 --dataset_config "ru-en"`
no cookies:
```
Traceback (most recent c... | 2021-03-03T17:42:02Z | https://github.com/huggingface/datasets/issues/1893 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1893/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1892/comments | https://api.github.com/repos/huggingface/datasets/issues/1892/timeline | 2021-03-25T11:53:23Z | null | completed | MDU6SXNzdWU4MDk1NTQxNzQ= | closed | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | 1,892 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | request to mirror wmt datasets, as they are really slow to download | https://api.github.com/repos/huggingface/datasets/issues/1892/events | null | https://api.github.com/repos/huggingface/datasets/issues/1892/labels{/name} | 2021-02-16T18:36:11Z | null | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | null | 809,554,174 | [] | https://api.github.com/repos/huggingface/datasets/issues/1892 | [
"Yes that would be awesome. Not only the download speeds are awful, but also some files are missing.\r\nWe list all the URLs in the datasets/wmt19/wmt_utils.py so we can make a script to download them all and host on S3.\r\nAlso I think most of the materials are under the CC BY-NC-SA 3.0 license (must double check)... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download.
Thank you! | 2021-10-26T06:55:42Z | https://github.com/huggingface/datasets/issues/1892 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1892/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1891/comments | https://api.github.com/repos/huggingface/datasets/issues/1891/timeline | 2022-10-05T12:48:38Z | null | completed | MDU6SXNzdWU4MDk1NTAwMDE= | closed | [] | null | 1,891 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | suggestion to improve a missing dataset error | https://api.github.com/repos/huggingface/datasets/issues/1891/events | null | https://api.github.com/repos/huggingface/datasets/issues/1891/labels{/name} | 2021-02-16T18:29:13Z | null | false | null | null | 809,550,001 | [] | https://api.github.com/repos/huggingface/datasets/issues/1891 | [
"This is the current error thrown for missing datasets:\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at C:\\Users\\Mario\\Desktop\\projects\\datasets\\missing_dataset\\missing_dataset.py or any data file in the same directory. Couldn't find 'missing_dataset' on the Hugging Face Hub either: FileNotFou... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | I was using `--dataset_name wmt19` all was good. Then thought perhaps wmt20 is out, so I tried to use `--dataset_name wmt20`, got 3 different errors (1 repeated twice), none telling me the real issue - that `wmt20` isn't in the `datasets`:
```
True, predict_with_generate=True)
Traceback (most recent call last):
... | 2022-10-05T12:48:38Z | https://github.com/huggingface/datasets/issues/1891 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1891/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1890/comments | https://api.github.com/repos/huggingface/datasets/issues/1890/timeline | 2021-02-16T15:12:33Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc0MjY0OTMx | closed | [] | false | 1,890 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Reformat dataset cards section titles | https://api.github.com/repos/huggingface/datasets/issues/1890/events | null | https://api.github.com/repos/huggingface/datasets/issues/1890/labels{/name} | 2021-02-16T15:11:47Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1890.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1890",
"merged_at": "2021-02-16T15:12:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1890.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 809,395,586 | [] | https://api.github.com/repos/huggingface/datasets/issues/1890 | [] | https://api.github.com/repos/huggingface/datasets | MEMBER | Titles are formatted like [Foo](#foo) instead of just Foo | 2021-02-16T15:12:34Z | https://github.com/huggingface/datasets/pull/1890 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1890/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1889/comments | https://api.github.com/repos/huggingface/datasets/issues/1889/timeline | 2021-02-18T18:42:34Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc0MTY1NDAz | closed | [] | false | 1,889 | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "... | Implement to_dict and to_pandas for Dataset | https://api.github.com/repos/huggingface/datasets/issues/1889/events | null | https://api.github.com/repos/huggingface/datasets/issues/1889/labels{/name} | 2021-02-16T12:38:19Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1889.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1889",
"merged_at": "2021-02-18T18:42:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1889.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 809,276,015 | [] | https://api.github.com/repos/huggingface/datasets/issues/1889 | [
"Next step is going to add these two in the documentation ^^"
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | With options to return a generator or the full dataset | 2021-02-18T18:42:37Z | https://github.com/huggingface/datasets/pull/1889 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1889/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1888/comments | https://api.github.com/repos/huggingface/datasets/issues/1888/timeline | 2021-02-16T11:58:57Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc0MTM2MDU4 | closed | [] | false | 1,888 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Docs for adding new column on formatted dataset | https://api.github.com/repos/huggingface/datasets/issues/1888/events | null | https://api.github.com/repos/huggingface/datasets/issues/1888/labels{/name} | 2021-02-16T11:45:00Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1888.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1888",
"merged_at": "2021-02-16T11:58:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1888.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 809,241,123 | [] | https://api.github.com/repos/huggingface/datasets/issues/1888 | [
"Close #1872"
] | https://api.github.com/repos/huggingface/datasets | MEMBER | As mentioned in #1872 we should add in the documentation how the format gets updated when new columns are added
Close #1872 | 2021-03-30T14:01:03Z | https://github.com/huggingface/datasets/pull/1888 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1888/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1887/comments | https://api.github.com/repos/huggingface/datasets/issues/1887/timeline | 2021-02-19T09:41:59Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc0MTI2NTMy | closed | [] | false | 1,887 | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "... | Implement to_csv for Dataset | https://api.github.com/repos/huggingface/datasets/issues/1887/events | null | https://api.github.com/repos/huggingface/datasets/issues/1887/labels{/name} | 2021-02-16T11:27:29Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1887.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1887",
"merged_at": "2021-02-19T09:41:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1887.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 809,229,809 | [] | https://api.github.com/repos/huggingface/datasets/issues/1887 | [
"@lhoestq I stumbled upon an interesting failure when adding tests for CSV serialization of `ArrayXD` features (see the failing unit tests in the CI)\r\n\r\nIt's due to the fact that booleans cannot be converted from arrow format to numpy without copy: https://arrow.apache.org/docs/python/generated/pyarrow.Array.ht... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | cc @thomwolf
`to_csv` supports passing either a file path or a *binary* file object
The writing is batched to avoid loading the whole table in memory | 2021-02-19T09:41:59Z | https://github.com/huggingface/datasets/pull/1887 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1887/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1886/comments | https://api.github.com/repos/huggingface/datasets/issues/1886/timeline | 2021-03-09T18:51:31Z | null | null | MDExOlB1bGxSZXF1ZXN0NTc0MTE5ODcz | closed | [] | false | 1,886 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4",
"events_url": "https://api.github.com/users/BirgerMoell/events{/privacy}",
"followers_url": "https://api.github.com/users/BirgerMoell/followers",
"following_url": "https://api.github.com/users/BirgerMoell/following{/other_user}",
"gists_ur... | Common voice | https://api.github.com/repos/huggingface/datasets/issues/1886/events | null | https://api.github.com/repos/huggingface/datasets/issues/1886/labels{/name} | 2021-02-16T11:16:10Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1886.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1886",
"merged_at": "2021-03-09T18:51:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1886.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 809,221,885 | [] | https://api.github.com/repos/huggingface/datasets/issues/1886 | [
"Does it make sense to make the domains as the different languages?\r\nA problem is that you need to download the datasets from the browser.\r\nOne idea would be to either contact Mozilla regarding API access to the dataset or make use of a headless browser for downloading the datasets (might be hard since we have ... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Started filling out information about the dataset and a dataset card.
To do
Create tagging file
Update the common_voice.py file with more information | 2021-03-09T18:51:31Z | https://github.com/huggingface/datasets/pull/1886 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1886/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1885/comments | https://api.github.com/repos/huggingface/datasets/issues/1885/timeline | 2021-02-16T11:44:12Z | null | null | MDExOlB1bGxSZXF1ZXN0NTczODQyNzcz | closed | [] | false | 1,885 | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | add missing info on how to add large files | https://api.github.com/repos/huggingface/datasets/issues/1885/events | null | https://api.github.com/repos/huggingface/datasets/issues/1885/labels{/name} | 2021-02-15T23:46:39Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1885.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1885",
"merged_at": "2021-02-16T11:44:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1885.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 808,881,501 | [] | https://api.github.com/repos/huggingface/datasets/issues/1885 | [] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Thanks to @lhoestq's instructions I was able to add data files to a custom dataset repo. This PR is attempting to tell others how to do the same if they need to.
@lhoestq | 2021-02-16T16:22:19Z | https://github.com/huggingface/datasets/pull/1885 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1885/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1884/comments | https://api.github.com/repos/huggingface/datasets/issues/1884/timeline | 2021-07-30T11:01:18Z | null | null | MDExOlB1bGxSZXF1ZXN0NTczNzQwNzI5 | closed | [] | false | 1,884 | {
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
... | dtype fix when using numpy arrays | https://api.github.com/repos/huggingface/datasets/issues/1884/events | null | https://api.github.com/repos/huggingface/datasets/issues/1884/labels{/name} | 2021-02-15T18:55:25Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1884.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1884",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1884.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1884"
} | 808,755,894 | [] | https://api.github.com/repos/huggingface/datasets/issues/1884 | [] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | As discussed in #625 this fix lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array | 2021-07-30T11:01:18Z | https://github.com/huggingface/datasets/pull/1884 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1884/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1883/comments | https://api.github.com/repos/huggingface/datasets/issues/1883/timeline | 2021-02-24T14:53:26Z | null | null | MDExOlB1bGxSZXF1ZXN0NTczNzM2NTIz | closed | [] | false | 1,883 | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "... | Add not-in-place implementations for several dataset transforms | https://api.github.com/repos/huggingface/datasets/issues/1883/events | null | https://api.github.com/repos/huggingface/datasets/issues/1883/labels{/name} | 2021-02-15T18:44:26Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1883.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1883",
"merged_at": "2021-02-24T14:53:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1883.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 808,750,623 | [] | https://api.github.com/repos/huggingface/datasets/issues/1883 | [
"@lhoestq I am not sure how to test `dictionary_encode_column` (in-place version was not tested before)",
"I can take a look at dictionary_encode_column tomorrow.\r\nAlthough it's likely that it doesn't work then. It was added at the beginning of the lib and never tested nor used afaik.",
"Now let's update the ... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Should we deprecate in-place versions of such methods? | 2021-02-24T14:54:49Z | https://github.com/huggingface/datasets/pull/1883 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1883/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1882/comments | https://api.github.com/repos/huggingface/datasets/issues/1882/timeline | null | null | null | MDExOlB1bGxSZXF1ZXN0NTczNzA4OTEw | open | [] | false | 1,882 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | Create Remote Manager | https://api.github.com/repos/huggingface/datasets/issues/1882/events | null | https://api.github.com/repos/huggingface/datasets/issues/1882/labels{/name} | 2021-02-15T17:36:24Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1882.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1882",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1882.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1882"
} | 808,716,576 | [] | https://api.github.com/repos/huggingface/datasets/issues/1882 | [
"@lhoestq I have refactorized the logic. Instead of the previous hierarchy call (local temp file opening -> remote call -> use again temp local file logic but from within the remote caller scope), now it is flattened. Schematically:\r\n```python\r\nwith src.open() as src_file, dst.open() as dst_file:\r\n src_fil... | https://api.github.com/repos/huggingface/datasets | MEMBER | Refactoring to separate the concern of remote (HTTP/FTP requests) management. | 2022-07-06T15:19:47Z | https://github.com/huggingface/datasets/pull/1882 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1882/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1881/comments | https://api.github.com/repos/huggingface/datasets/issues/1881/timeline | 2021-02-15T15:09:48Z | null | null | MDExOlB1bGxSZXF1ZXN0NTczNTk1Nzkw | closed | [] | false | 1,881 | {
"avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4",
"events_url": "https://api.github.com/users/pminervini/events{/privacy}",
"followers_url": "https://api.github.com/users/pminervini/followers",
"following_url": "https://api.github.com/users/pminervini/following{/other_user}",
"gists_url": ... | `list_datasets()` returns a list of strings, not objects | https://api.github.com/repos/huggingface/datasets/issues/1881/events | null | https://api.github.com/repos/huggingface/datasets/issues/1881/labels{/name} | 2021-02-15T14:20:15Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1881.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1881",
"merged_at": "2021-02-15T15:09:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1881.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 808,578,200 | [] | https://api.github.com/repos/huggingface/datasets/issues/1881 | [] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Here and there in the docs there is still stuff like this:
```python
>>> datasets_list = list_datasets()
>>> print(', '.join(dataset.id for dataset in datasets_list))
```
However, my understanding is that `list_datasets()` returns a list of strings rather than a list of objects. | 2021-02-15T15:09:49Z | https://github.com/huggingface/datasets/pull/1881 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1881/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1880/comments | https://api.github.com/repos/huggingface/datasets/issues/1880/timeline | 2021-02-15T14:18:18Z | null | null | MDExOlB1bGxSZXF1ZXN0NTczNTgzNjg0 | closed | [] | false | 1,880 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Update multi_woz_v22 checksums | https://api.github.com/repos/huggingface/datasets/issues/1880/events | null | https://api.github.com/repos/huggingface/datasets/issues/1880/labels{/name} | 2021-02-15T14:00:18Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1880.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1880",
"merged_at": "2021-02-15T14:18:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1880.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 808,563,439 | [] | https://api.github.com/repos/huggingface/datasets/issues/1880 | [] | https://api.github.com/repos/huggingface/datasets | MEMBER | As noticed in #1876 the checksums of this dataset are outdated.
I updated them in this PR | 2021-02-15T14:18:19Z | https://github.com/huggingface/datasets/pull/1880 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1880/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1879/comments | https://api.github.com/repos/huggingface/datasets/issues/1879/timeline | 2021-02-19T18:35:14Z | null | null | MDExOlB1bGxSZXF1ZXN0NTczNTY1NDAx | closed | [] | false | 1,879 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | Replace flatten_nested | https://api.github.com/repos/huggingface/datasets/issues/1879/events | null | https://api.github.com/repos/huggingface/datasets/issues/1879/labels{/name} | 2021-02-15T13:29:40Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1879.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1879",
"merged_at": "2021-02-19T18:35:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1879.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 808,541,442 | [] | https://api.github.com/repos/huggingface/datasets/issues/1879 | [
"Hi @lhoestq. If you agree to merge this, I will start separating the logic for NestedDataStructure.map ;)"
] | https://api.github.com/repos/huggingface/datasets | MEMBER | Replace `flatten_nested` with `NestedDataStructure.flatten`.
This is a first step towards having all NestedDataStructure logic as a separated concern, independent of the caller/user of the data structure.
Eventually, all checks (whether the underlying data is list, dict, etc.) will be only inside this class.
I... | 2021-02-19T18:35:14Z | https://github.com/huggingface/datasets/pull/1879 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1879/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1878/comments | https://api.github.com/repos/huggingface/datasets/issues/1878/timeline | 2021-02-15T14:18:09Z | null | null | MDExOlB1bGxSZXF1ZXN0NTczNTUyODk3 | closed | [] | false | 1,878 | {
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"events_url": "https://api.github.com/users/anton-l/events{/privacy}",
"followers_url": "https://api.github.com/users/anton-l/followers",
"following_url": "https://api.github.com/users/anton-l/following{/other_user}",
"gists_url": "https:... | Add LJ Speech dataset | https://api.github.com/repos/huggingface/datasets/issues/1878/events | null | https://api.github.com/repos/huggingface/datasets/issues/1878/labels{/name} | 2021-02-15T13:10:42Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1878.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1878",
"merged_at": "2021-02-15T14:18:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1878.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 808,526,883 | [] | https://api.github.com/repos/huggingface/datasets/issues/1878 | [
"Hey @anton-l,\r\n\r\nThanks a lot for the very clean integration!\r\n\r\n1) I think we should now start having \"automatic-speech-recognition\" as a label in the dataset tagger (@yjernite is it easy to add?). But we can surely add this dataset with the tag you've added and then later change the label to `asr` \r\n... | https://api.github.com/repos/huggingface/datasets | MEMBER | This PR adds the LJ Speech dataset (https://keithito.com/LJ-Speech-Dataset/)
As requested by #1841
The ASR format is based on #1767
There are a couple of quirks that should be addressed:
- I tagged this dataset as `other-other-automatic-speech-recognition` and `other-other-text-to-speech` (as classified by pape... | 2021-02-15T19:39:41Z | https://github.com/huggingface/datasets/pull/1878 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1878/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1877/comments | https://api.github.com/repos/huggingface/datasets/issues/1877/timeline | 2021-03-26T16:51:58Z | null | completed | MDU6SXNzdWU4MDg0NjIyNzI= | closed | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | 1,877 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Allow concatenation of both in-memory and on-disk datasets | https://api.github.com/repos/huggingface/datasets/issues/1877/events | null | https://api.github.com/repos/huggingface/datasets/issues/1877/labels{/name} | 2021-02-15T11:39:46Z | null | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | null | 808,462,272 | [] | https://api.github.com/repos/huggingface/datasets/issues/1877 | [
"I started working on this. My idea is to first add the pyarrow Table wrappers InMemoryTable and MemoryMappedTable that both implement what's necessary regarding copy/pickle. Then have another wrapper that takes the concatenation of InMemoryTable/MemoryMappedTable objects.\r\n\r\nWhat's important here is that conca... | https://api.github.com/repos/huggingface/datasets | MEMBER | This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files).
This assumption is used for pickl... | 2021-03-26T16:51:58Z | https://github.com/huggingface/datasets/issues/1877 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1877/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1876/comments | https://api.github.com/repos/huggingface/datasets/issues/1876/timeline | 2021-08-04T18:08:00Z | null | completed | MDU6SXNzdWU4MDgwMjU4NTk= | closed | [] | null | 1,876 | {
"avatar_url": "https://avatars.githubusercontent.com/u/5945326?v=4",
"events_url": "https://api.github.com/users/Vincent950129/events{/privacy}",
"followers_url": "https://api.github.com/users/Vincent950129/followers",
"following_url": "https://api.github.com/users/Vincent950129/following{/other_user}",
"gi... | load_dataset("multi_woz_v22") NonMatchingChecksumError | https://api.github.com/repos/huggingface/datasets/issues/1876/events | null | https://api.github.com/repos/huggingface/datasets/issues/1876/labels{/name} | 2021-02-14T19:14:48Z | null | false | null | null | 808,025,859 | [] | https://api.github.com/repos/huggingface/datasets/issues/1876 | [
"Thanks for reporting !\r\nThis is due to the changes made in the data files in the multiwoz repo: https://github.com/budzianowski/multiwoz/pull/59\r\nI'm opening a PR to update the checksums of the data files.",
"I just merged the fix. It will be available in the new release of `datasets` later today.\r\nYou'll ... | https://api.github.com/repos/huggingface/datasets | NONE | Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.N... | 2021-08-04T18:08:00Z | https://github.com/huggingface/datasets/issues/1876 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1876/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1875/comments | https://api.github.com/repos/huggingface/datasets/issues/1875/timeline | 2021-02-17T15:56:27Z | null | null | MDExOlB1bGxSZXF1ZXN0NTczMDM2NzE0 | closed | [] | false | 1,875 | {
"avatar_url": "https://avatars.githubusercontent.com/u/6061911?v=4",
"events_url": "https://api.github.com/users/ddhruvkr/events{/privacy}",
"followers_url": "https://api.github.com/users/ddhruvkr/followers",
"following_url": "https://api.github.com/users/ddhruvkr/following{/other_user}",
"gists_url": "http... | Adding sari metric | https://api.github.com/repos/huggingface/datasets/issues/1875/events | null | https://api.github.com/repos/huggingface/datasets/issues/1875/labels{/name} | 2021-02-14T04:38:35Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1875.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1875",
"merged_at": "2021-02-17T15:56:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1875.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 807,887,267 | [] | https://api.github.com/repos/huggingface/datasets/issues/1875 | [] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Adding SARI metric that is used in evaluation of text simplification. This is required as part of the GEM benchmark. | 2021-02-17T15:56:27Z | https://github.com/huggingface/datasets/pull/1875 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1875/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1874/comments | https://api.github.com/repos/huggingface/datasets/issues/1874/timeline | 2021-03-04T10:38:22Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcyOTYzMjAy | closed | [] | false | 1,874 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4",
"events_url": "https://api.github.com/users/lucadiliello/events{/privacy}",
"followers_url": "https://api.github.com/users/lucadiliello/followers",
"following_url": "https://api.github.com/users/lucadiliello/following{/other_user}",
"gist... | Adding Europarl Bilingual dataset | https://api.github.com/repos/huggingface/datasets/issues/1874/events | null | https://api.github.com/repos/huggingface/datasets/issues/1874/labels{/name} | 2021-02-13T17:02:04Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1874.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1874",
"merged_at": "2021-03-04T10:38:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1874.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 807,786,094 | [] | https://api.github.com/repos/huggingface/datasets/issues/1874 | [
"is there a way to check errors without subscribing to CircleCI? Because they want access to private repositories when logging.",
"I think you need to be logged in to check the errors unfortunately. Feel free to create an account with bitbucket maybe if you don't want it to access your private github repos",
I... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Implementation of the Europarl bilingual dataset as described [here](https://opus.nlpl.eu/Europarl.php).
This dataset allows you to use every language pair detailed in the original dataset. The loading script also handles the small errors contained in the original dataset (in very rare cases (1 in 10M) there are some ke...
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1874/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1873/comments | https://api.github.com/repos/huggingface/datasets/issues/1873/timeline | 2021-02-16T14:21:58Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcyOTM4MTYy | closed | [] | false | 1,873 | {
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"events_url": "https://api.github.com/users/cstorm125/events{/privacy}",
"followers_url": "https://api.github.com/users/cstorm125/followers",
"following_url": "https://api.github.com/users/cstorm125/following{/other_user}",
"gists_url": "... | add iapp_wiki_qa_squad | https://api.github.com/repos/huggingface/datasets/issues/1873/events | null | https://api.github.com/repos/huggingface/datasets/issues/1873/labels{/name} | 2021-02-13T13:34:27Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1873.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1873",
"merged_at": "2021-02-16T14:21:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1873.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 807,750,745 | [] | https://api.github.com/repos/huggingface/datasets/issues/1873 | [] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | `iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles.
It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset)
to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in
5761/742/739 questions from 1529/... | 2021-02-16T14:21:58Z | https://github.com/huggingface/datasets/pull/1873 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1873/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1872/comments | https://api.github.com/repos/huggingface/datasets/issues/1872/timeline | 2021-03-30T14:01:45Z | null | completed | MDU6SXNzdWU4MDc3MTE5MzU= | closed | [] | null | 1,872 | {
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"events_url": "https://api.github.com/users/villmow/events{/privacy}",
"followers_url": "https://api.github.com/users/villmow/followers",
"following_url": "https://api.github.com/users/villmow/following{/other_user}",
"gists_url": "https:/... | Adding a new column to the dataset after set_format was called | https://api.github.com/repos/huggingface/datasets/issues/1872/events | null | https://api.github.com/repos/huggingface/datasets/issues/1872/labels{/name} | 2021-02-13T09:14:35Z | null | false | null | null | 807,711,935 | [] | https://api.github.com/repos/huggingface/datasets/issues/1872 | [
"Hi ! Indeed if you add a column to a formatted dataset, then the new dataset gets a new formatting in which:\r\n```\r\nnew formatted columns = (all columns - previously unformatted columns)\r\n```\r\nTherefore the new column is going to be formatted using the `torch` formatting.\r\n\r\nIf you want your new column ... | https://api.github.com/repos/huggingface/datasets | NONE | Hi,
thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if it's a problem on my side.
I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1"... | 2021-03-30T14:01:45Z | https://github.com/huggingface/datasets/issues/1872 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1872/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1871/comments | https://api.github.com/repos/huggingface/datasets/issues/1871/timeline | 2021-03-08T10:12:45Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcyODk5Nzgz | closed | [] | false | 1,871 | {
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"events_url": "https://api.github.com/users/frankier/events{/privacy}",
"followers_url": "https://api.github.com/users/frankier/followers",
"following_url": "https://api.github.com/users/frankier/following{/other_user}",
"gists_url": "https... | Add newspop dataset | https://api.github.com/repos/huggingface/datasets/issues/1871/events | null | https://api.github.com/repos/huggingface/datasets/issues/1871/labels{/name} | 2021-02-13T07:31:23Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1871.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1871",
"merged_at": "2021-03-08T10:12:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1871.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 807,697,671 | [] | https://api.github.com/repos/huggingface/datasets/issues/1871 | [
"Thanks for the changes :)\r\nmerging"
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | 2021-03-08T10:12:45Z | https://github.com/huggingface/datasets/pull/1871 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1871/reactions"
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/1870/comments | https://api.github.com/repos/huggingface/datasets/issues/1870/timeline | 2021-04-23T10:01:31Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcyNTc4Mjc4 | closed | [] | false | 1,870 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | Implement Dataset add_item | https://api.github.com/repos/huggingface/datasets/issues/1870/events | {
"closed_at": "2021-05-31T16:20:53Z",
"closed_issues": 3,
"created_at": "2021-04-09T13:16:31Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/u... | https://api.github.com/repos/huggingface/datasets/issues/1870/labels{/name} | 2021-02-12T15:03:46Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1870.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1870",
"merged_at": "2021-04-23T10:01:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1870.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 807,306,564 | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | https://api.github.com/repos/huggingface/datasets/issues/1870 | [
"Thanks @lhoestq for your remarks. Yes, I agree there are still many issues to be tackled... This PR is just a starting point, so that we can discuss how Dataset should be generalized.",
"Sure ! I opened an issue #1877 so we can discuss this specific aspect :)",
"I am going to implement this consolidation step ... | https://api.github.com/repos/huggingface/datasets | MEMBER | Implement `Dataset.add_item`.
Close #1854. | 2021-04-23T10:01:31Z | https://github.com/huggingface/datasets/pull/1870 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1870/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1869/comments | https://api.github.com/repos/huggingface/datasets/issues/1869/timeline | 2021-02-12T16:13:08Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcyNDU0NTMy | closed | [] | false | 1,869 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Remove outdated commands in favor of huggingface-cli | https://api.github.com/repos/huggingface/datasets/issues/1869/events | null | https://api.github.com/repos/huggingface/datasets/issues/1869/labels{/name} | 2021-02-12T11:28:10Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1869.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1869",
"merged_at": "2021-02-12T16:13:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1869.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 807,159,835 | [] | https://api.github.com/repos/huggingface/datasets/issues/1869 | [] | https://api.github.com/repos/huggingface/datasets | MEMBER | Removing the old user commands since `huggingface_hub` is going to be used instead.
cc @julien-c | 2021-02-12T16:13:09Z | https://github.com/huggingface/datasets/pull/1869 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1869/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1868/comments | https://api.github.com/repos/huggingface/datasets/issues/1868/timeline | 2021-02-12T11:03:06Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcyNDM2MjA0 | closed | [] | false | 1,868 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Update oscar sizes | https://api.github.com/repos/huggingface/datasets/issues/1868/events | null | https://api.github.com/repos/huggingface/datasets/issues/1868/labels{/name} | 2021-02-12T10:55:35Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1868.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1868",
"merged_at": "2021-02-12T11:03:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1868.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 807,138,159 | [] | https://api.github.com/repos/huggingface/datasets/issues/1868 | [] | https://api.github.com/repos/huggingface/datasets | MEMBER | This commit https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383 removed empty lines from the oscar deduplicated datasets. This PR updates the size of each deduplicated dataset to fix possible `NonMatchingSplitsSizesError` errors. cc @cahya-wirawan | 2021-02-12T11:03:07Z | https://github.com/huggingface/datasets/pull/1868 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1868/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1867/comments | https://api.github.com/repos/huggingface/datasets/issues/1867/timeline | 2021-02-24T12:00:43Z | null | completed | MDU6SXNzdWU4MDcxMjcxODE= | closed | [] | null | 1,867 | {
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_u... | ERROR WHEN USING SET_TRANSFORM() | https://api.github.com/repos/huggingface/datasets/issues/1867/events | null | https://api.github.com/repos/huggingface/datasets/issues/1867/labels{/name} | 2021-02-12T10:38:31Z | null | false | null | null | 807,127,181 | [] | https://api.github.com/repos/huggingface/datasets/issues/1867 | [
"Hi @alejandrocros it looks like an incompatibility with the current Trainer @sgugger \r\nIndeed currently the Trainer of `transformers` doesn't support a dataset with a transform\r\n\r\nIt looks like it comes from this line: https://github.com/huggingface/transformers/blob/f51188cbe74195c14c5b3e2e8f10c2f435f9751a/... | https://api.github.com/repos/huggingface/datasets | NONE | Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such dataset, it throws an error:
```
TypeError: __init__() missing 1 required positional arg... | 2021-03-01T14:04:24Z | https://github.com/huggingface/datasets/issues/1867 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1867/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1866/comments | https://api.github.com/repos/huggingface/datasets/issues/1866/timeline | 2021-02-17T14:22:36Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcyMzM3NDQ1 | closed | [] | false | 1,866 | {
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"events_url": "https://api.github.com/users/frankier/events{/privacy}",
"followers_url": "https://api.github.com/users/frankier/followers",
"following_url": "https://api.github.com/users/frankier/following{/other_user}",
"gists_url": "https... | Add dataset for Financial PhraseBank | https://api.github.com/repos/huggingface/datasets/issues/1866/events | null | https://api.github.com/repos/huggingface/datasets/issues/1866/labels{/name} | 2021-02-12T07:30:56Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1866.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1866",
"merged_at": "2021-02-17T14:22:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1866.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 807,017,816 | [] | https://api.github.com/repos/huggingface/datasets/issues/1866 | [
"Thanks for the feedback. All accepted and metadata regenerated."
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | 2021-02-17T14:22:36Z | https://github.com/huggingface/datasets/pull/1866 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1866/reactions"
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/1865/comments | https://api.github.com/repos/huggingface/datasets/issues/1865/timeline | 2021-02-12T16:59:44Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcxODE2ODI2 | closed | [] | false | 1,865 | {
"avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4",
"events_url": "https://api.github.com/users/Valahaar/events{/privacy}",
"followers_url": "https://api.github.com/users/Valahaar/followers",
"following_url": "https://api.github.com/users/Valahaar/following{/other_user}",
"gists_url": "htt... | Updated OPUS Open Subtitles Dataset with metadata information | https://api.github.com/repos/huggingface/datasets/issues/1865/events | null | https://api.github.com/repos/huggingface/datasets/issues/1865/labels{/name} | 2021-02-11T13:26:26Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1865.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1865",
"merged_at": "2021-02-12T16:59:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1865.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 806,388,290 | [] | https://api.github.com/repos/huggingface/datasets/issues/1865 | [
"Hi !\r\nAbout the problems you mentioned:\r\n- Saving the infos is only done for the configurations inside the BUILDER_CONFIGS. Otherwise you would need to run the scripts on ALL language pairs, which is not what we want.\r\n- Moreover when you're on your branch, please specify the path to your local version of th... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Close #1844
Problems:
- I ran `python datasets-cli test datasets/open_subtitles --save_infos --all_configs`, hence the change in `dataset_infos.json`, but it appears that the metadata features have not been added for all pairs. Any idea why that might be?
- Possibly related to the above, I tried doing `pip uninst... | 2021-02-19T12:38:09Z | https://github.com/huggingface/datasets/pull/1865 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1865/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1864/comments | https://api.github.com/repos/huggingface/datasets/issues/1864/timeline | 2021-02-11T08:19:51Z | null | completed | MDU6SXNzdWU4MDYxNzI4NDM= | closed | [] | null | 1,864 | {
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url"... | Add Winogender Schemas | https://api.github.com/repos/huggingface/datasets/issues/1864/events | null | https://api.github.com/repos/huggingface/datasets/issues/1864/labels{/name} | 2021-02-11T08:18:38Z | null | false | null | null | 806,172,843 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | https://api.github.com/repos/huggingface/datasets/issues/1864 | [
"Nevermind, this one is already available on the hub under the name `'wino_bias'`: https://huggingface.co/datasets/wino_bias"
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ## Adding a Dataset
- **Name:** Winogender Schemas
- **Description:** Winogender Schemas (inspired by Winograd Schemas) are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias in automated coreference resolution systems.
- **Paper... | 2021-02-11T08:19:51Z | https://github.com/huggingface/datasets/issues/1864 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1864/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1863/comments | https://api.github.com/repos/huggingface/datasets/issues/1863/timeline | null | null | null | MDU6SXNzdWU4MDYxNzEzMTE= | open | [] | null | 1,863 | {
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url"... | Add WikiCREM | https://api.github.com/repos/huggingface/datasets/issues/1863/events | null | https://api.github.com/repos/huggingface/datasets/issues/1863/labels{/name} | 2021-02-11T08:16:00Z | null | false | null | null | 806,171,311 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | https://api.github.com/repos/huggingface/datasets/issues/1863 | [
"Hi @NielsRogge I would like to work on this dataset.\r\n\r\nThanks!",
"Hi @udapy, are you working on this?"
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ## Adding a Dataset
- **Name:** WikiCREM
- **Description:** A large unsupervised corpus for coreference resolution.
- **Paper:** https://arxiv.org/abs/1905.06290
- **Github repo:**: https://github.com/vid-koci/bert-commonsense
- **Data:** https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3
- **... | 2021-03-07T07:27:13Z | https://github.com/huggingface/datasets/issues/1863 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1863/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1862/comments | https://api.github.com/repos/huggingface/datasets/issues/1862/timeline | 2021-02-10T18:17:47Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcxMjc2ODAx | closed | [] | false | 1,862 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Fix writing GPU Faiss index | https://api.github.com/repos/huggingface/datasets/issues/1862/events | null | https://api.github.com/repos/huggingface/datasets/issues/1862/labels{/name} | 2021-02-10T17:32:03Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1862.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1862",
"merged_at": "2021-02-10T18:17:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1862.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 805,722,293 | [] | https://api.github.com/repos/huggingface/datasets/issues/1862 | [] | https://api.github.com/repos/huggingface/datasets | MEMBER | As reported in by @corticalstack there is currently an error when we try to save a faiss index on GPU.
I fixed that by checking the index `getDevice()` method before calling `index_gpu_to_cpu`
Close #1859 | 2021-02-10T18:17:48Z | https://github.com/huggingface/datasets/pull/1862 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1862/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1861/comments | https://api.github.com/repos/huggingface/datasets/issues/1861/timeline | 2021-02-10T16:14:59Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcxMjAwNjA1 | closed | [] | false | 1,861 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Fix Limit url | https://api.github.com/repos/huggingface/datasets/issues/1861/events | null | https://api.github.com/repos/huggingface/datasets/issues/1861/labels{/name} | 2021-02-10T15:44:56Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1861.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1861",
"merged_at": "2021-02-10T16:14:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1861.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 805,631,215 | [] | https://api.github.com/repos/huggingface/datasets/issues/1861 | [] | https://api.github.com/repos/huggingface/datasets | MEMBER | The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was removed recently on the master branch of the repo at https://github.com/ilmgut/limit_dataset
This PR uses the previous commit sha to download the file instead, as suggested by @Paethon
Close #1836 | 2021-02-10T16:15:00Z | https://github.com/huggingface/datasets/pull/1861 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1861/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1860/comments | https://api.github.com/repos/huggingface/datasets/issues/1860/timeline | 2021-02-12T19:13:29Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcxMDk4OTIz | closed | [] | false | 1,860 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Add loading from the Datasets Hub + add relative paths in download manager | https://api.github.com/repos/huggingface/datasets/issues/1860/events | null | https://api.github.com/repos/huggingface/datasets/issues/1860/labels{/name} | 2021-02-10T13:24:11Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1860.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1860",
"merged_at": "2021-02-12T19:13:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1860.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 805,510,037 | [] | https://api.github.com/repos/huggingface/datasets/issues/1860 | [
"I just added the steps to share a dataset on the datasets hub. It's highly inspired by the steps to share a model in the `transformers` doc.\r\n\r\nMoreover once the new huggingface_hub is released we can update the version in the setup.py. We also need to update the command to create a dataset repo in the documen... | https://api.github.com/repos/huggingface/datasets | MEMBER | With the new Datasets Hub on huggingface.co it's now possible to have a dataset repo with your own script and data.
For example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main contains one script and two json files.
You can load it using
```python
from datasets import load_dataset
d = load_data... | 2021-02-12T19:13:30Z | https://github.com/huggingface/datasets/pull/1860 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1860/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1859/comments | https://api.github.com/repos/huggingface/datasets/issues/1859/timeline | 2021-02-10T18:17:47Z | null | completed | MDU6SXNzdWU4MDU0NzkwMjU= | closed | [] | null | 1,859 | {
"avatar_url": "https://avatars.githubusercontent.com/u/3995321?v=4",
"events_url": "https://api.github.com/users/corticalstack/events{/privacy}",
"followers_url": "https://api.github.com/users/corticalstack/followers",
"following_url": "https://api.github.com/users/corticalstack/following{/other_user}",
"gi... | Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU) | https://api.github.com/repos/huggingface/datasets/issues/1859/events | null | https://api.github.com/repos/huggingface/datasets/issues/1859/labels{/name} | 2021-02-10T12:41:00Z | null | false | null | null | 805,479,025 | [] | https://api.github.com/repos/huggingface/datasets/issues/1859 | [
"Hi @corticalstack ! Thanks for reporting. Indeed in the recent versions of Faiss we must use `getDevice` to check if the index in on GPU.\r\n\r\nI'm opening a PR",
"I fixed this issue. It should work fine now.\r\nFeel free to try it out by installing `datasets` from source.\r\nOtherwise you can wait for the next... | https://api.github.com/repos/huggingface/datasets | NONE | Error serializing faiss index. Error as follows:
`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index`
Note:
`torch.cuda.is_availabl... | 2021-02-10T18:32:12Z | https://github.com/huggingface/datasets/issues/1859 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1859/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1858/comments | https://api.github.com/repos/huggingface/datasets/issues/1858/timeline | 2021-02-10T15:52:29Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcxMDcxNzIx | closed | [] | false | 1,858 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Clean config getenvs | https://api.github.com/repos/huggingface/datasets/issues/1858/events | null | https://api.github.com/repos/huggingface/datasets/issues/1858/labels{/name} | 2021-02-10T12:39:14Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1858.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1858",
"merged_at": "2021-02-10T15:52:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1858.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 805,477,774 | [] | https://api.github.com/repos/huggingface/datasets/issues/1858 | [] | https://api.github.com/repos/huggingface/datasets | MEMBER | Following #1848
Remove double getenv calls and fix one issue with rarfile
cc @albertvillanova | 2021-02-10T15:52:30Z | https://github.com/huggingface/datasets/pull/1858 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1858/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1857/comments | https://api.github.com/repos/huggingface/datasets/issues/1857/timeline | 2021-08-03T05:06:13Z | null | completed | MDU6SXNzdWU4MDUzOTExMDc= | closed | [] | null | 1,857 | {
"avatar_url": "https://avatars.githubusercontent.com/u/1376337?v=4",
"events_url": "https://api.github.com/users/mwrzalik/events{/privacy}",
"followers_url": "https://api.github.com/users/mwrzalik/followers",
"following_url": "https://api.github.com/users/mwrzalik/following{/other_user}",
"gists_url": "http... | Unable to upload "community provided" dataset - 400 Client Error | https://api.github.com/repos/huggingface/datasets/issues/1857/events | null | https://api.github.com/repos/huggingface/datasets/issues/1857/labels{/name} | 2021-02-10T10:39:01Z | null | false | null | null | 805,391,107 | [] | https://api.github.com/repos/huggingface/datasets/issues/1857 | [
"Hi ! We're in the process of switching the community datasets to git repos, exactly like what we're doing for models.\r\nYou can find an example here:\r\nhttps://huggingface.co/datasets/lhoestq/custom_squad/tree/main\r\n\r\nWe'll update the CLI in the coming days and do a new release :)\r\n\r\nAlso cc @julien-c ma... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Hi,
I'm trying to upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens:
```
$ datasets-cli login
$ datasets-cli upload_dataset my_dataset
About to upload file /path/to/my_dataset/dataset_infos.json to S3... | 2021-08-03T05:06:13Z | https://github.com/huggingface/datasets/issues/1857 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1857/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1856/comments | https://api.github.com/repos/huggingface/datasets/issues/1856/timeline | 2022-03-15T13:55:23Z | null | completed | MDU6SXNzdWU4MDUzNjAyMDA= | closed | [] | null | 1,856 | {
"avatar_url": "https://avatars.githubusercontent.com/u/19946372?v=4",
"events_url": "https://api.github.com/users/yanxi0830/events{/privacy}",
"followers_url": "https://api.github.com/users/yanxi0830/followers",
"following_url": "https://api.github.com/users/yanxi0830/following{/other_user}",
"gists_url": "... | load_dataset("amazon_polarity") NonMatchingChecksumError | https://api.github.com/repos/huggingface/datasets/issues/1856/events | null | https://api.github.com/repos/huggingface/datasets/issues/1856/labels{/name} | 2021-02-10T10:00:56Z | null | false | null | null | 805,360,200 | [] | https://api.github.com/repos/huggingface/datasets/issues/1856 | [
"Hi ! This issue may be related to #996 \r\nThis comes probably from the Quota Exceeded error from Google Drive.\r\nCan you try again tomorrow and see if you still have the error ?\r\n\r\nOn my side I didn't get any error today with `load_dataset(\"amazon_polarity\")`",
"+1 encountering this issue as well",
"@l... | https://api.github.com/repos/huggingface/datasets | NONE | Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError.
To reproduce:
```
load_dataset("amazon_polarity")
```
This will give the following error:
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback ... | 2022-03-15T13:55:24Z | https://github.com/huggingface/datasets/issues/1856 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1856/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1855/comments | https://api.github.com/repos/huggingface/datasets/issues/1855/timeline | 2021-02-10T12:33:09Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcwODkzNDY3 | closed | [] | false | 1,855 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | Minor fix in the docs | https://api.github.com/repos/huggingface/datasets/issues/1855/events | null | https://api.github.com/repos/huggingface/datasets/issues/1855/labels{/name} | 2021-02-10T07:27:43Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1855.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1855",
"merged_at": "2021-02-10T12:33:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1855.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 805,256,579 | [] | https://api.github.com/repos/huggingface/datasets/issues/1855 | [] | https://api.github.com/repos/huggingface/datasets | MEMBER | 2021-02-10T12:33:09Z | https://github.com/huggingface/datasets/pull/1855 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1855/reactions"
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/1854/comments | https://api.github.com/repos/huggingface/datasets/issues/1854/timeline | 2021-04-23T10:01:30Z | null | completed | MDU6SXNzdWU4MDUyMDQzOTc= | closed | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | 1,854 | {
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"events_url": "https://api.github.com/users/sshleifer/events{/privacy}",
"followers_url": "https://api.github.com/users/sshleifer/followers",
"following_url": "https://api.github.com/users/sshleifer/following{/other_user}",
"gists_url": "h... | Feature Request: Dataset.add_item | https://api.github.com/repos/huggingface/datasets/issues/1854/events | null | https://api.github.com/repos/huggingface/datasets/issues/1854/labels{/name} | 2021-02-10T06:06:00Z | null | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | null | 805,204,397 | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | https://api.github.com/repos/huggingface/datasets/issues/1854 | [
"Hi @sshleifer.\r\n\r\nI am not sure of understanding the need of the `add_item` approach...\r\n\r\nBy just reading your \"Desired API\" section, I would say you could (nearly) get it with a 1-column Dataset:\r\n```python\r\ndata = {\"input_ids\": [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])]}\r\... | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.m... | 2021-04-23T10:01:30Z | https://github.com/huggingface/datasets/issues/1854 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1854/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1853/comments | https://api.github.com/repos/huggingface/datasets/issues/1853/timeline | 2021-02-10T12:32:34Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcwNTAwMjc4 | closed | [] | false | 1,853 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | Configure library root logger at the module level | https://api.github.com/repos/huggingface/datasets/issues/1853/events | null | https://api.github.com/repos/huggingface/datasets/issues/1853/labels{/name} | 2021-02-09T18:11:12Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1853.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1853",
"merged_at": "2021-02-10T12:32:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1853.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 804,791,166 | [] | https://api.github.com/repos/huggingface/datasets/issues/1853 | [] | https://api.github.com/repos/huggingface/datasets | MEMBER | Configure library root logger at the datasets.logging module level (singleton-like).
By doing it this way:
- we are sure configuration is done only once: module-level code is only run once
- no need for a global variable
- no need for a threading lock
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1853/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1852/comments | https://api.github.com/repos/huggingface/datasets/issues/1852/timeline | 2021-02-11T10:18:55Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcwMzY3NTU1 | closed | [] | false | 1,852 | {
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gist... | Add Arabic Speech Corpus | https://api.github.com/repos/huggingface/datasets/issues/1852/events | null | https://api.github.com/repos/huggingface/datasets/issues/1852/labels{/name} | 2021-02-09T15:02:26Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1852.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1852",
"merged_at": "2021-02-11T10:18:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1852.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 804,633,033 | [] | https://api.github.com/repos/huggingface/datasets/issues/1852 | [] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | 2021-02-11T10:18:55Z | https://github.com/huggingface/datasets/pull/1852 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1852/reactions"
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/1851/comments | https://api.github.com/repos/huggingface/datasets/issues/1851/timeline | 2021-02-09T14:21:48Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcwMjc2MTk5 | closed | [] | false | 1,851 | {
"avatar_url": "https://avatars.githubusercontent.com/u/3596?v=4",
"events_url": "https://api.github.com/users/pvl/events{/privacy}",
"followers_url": "https://api.github.com/users/pvl/followers",
"following_url": "https://api.github.com/users/pvl/following{/other_user}",
"gists_url": "https://api.github.com... | set bert_score version dependency | https://api.github.com/repos/huggingface/datasets/issues/1851/events | null | https://api.github.com/repos/huggingface/datasets/issues/1851/labels{/name} | 2021-02-09T12:51:07Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1851.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1851",
"merged_at": "2021-02-09T14:21:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1851.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 804,523,174 | [] | https://api.github.com/repos/huggingface/datasets/issues/1851 | [] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843) | 2021-02-09T14:21:48Z | https://github.com/huggingface/datasets/pull/1851 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1851/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1850/comments | https://api.github.com/repos/huggingface/datasets/issues/1850/timeline | 2021-02-09T15:16:26Z | null | null | MDExOlB1bGxSZXF1ZXN0NTcwMTg0MDAx | closed | [] | false | 1,850 | {
"avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4",
"events_url": "https://api.github.com/users/ggdupont/events{/privacy}",
"followers_url": "https://api.github.com/users/ggdupont/followers",
"following_url": "https://api.github.com/users/ggdupont/following{/other_user}",
"gists_url": "http... | Add cord 19 dataset | https://api.github.com/repos/huggingface/datasets/issues/1850/events | null | https://api.github.com/repos/huggingface/datasets/issues/1850/labels{/name} | 2021-02-09T10:22:08Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1850.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1850",
"merged_at": "2021-02-09T15:16:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1850.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 804,412,249 | [] | https://api.github.com/repos/huggingface/datasets/issues/1850 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Initial version only reading the metadata in CSV.
### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIG... | 2021-02-09T15:16:26Z | https://github.com/huggingface/datasets/pull/1850 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1850/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1849/comments | https://api.github.com/repos/huggingface/datasets/issues/1849/timeline | 2021-03-15T05:59:37Z | null | completed | MDU6SXNzdWU4MDQyOTI5NzE= | closed | [] | null | 1,849 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | Add TIMIT | https://api.github.com/repos/huggingface/datasets/issues/1849/events | null | https://api.github.com/repos/huggingface/datasets/issues/1849/labels{/name} | 2021-02-09T07:29:41Z | null | false | null | null | 804,292,971 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | https://api.github.com/repos/huggingface/datasets/issues/1849 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ## Adding a Dataset
- **Name:** *TIMIT*
- **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems*
- **Paper:** *Homepage*: http://groups.inf.ed.ac.uk... | 2021-03-15T05:59:37Z | https://github.com/huggingface/datasets/issues/1849 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1849/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1848/comments | https://api.github.com/repos/huggingface/datasets/issues/1848/timeline | 2021-02-10T12:29:35Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY5Njg5ODU1 | closed | [] | false | 1,848 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | Refactoring: Create config module | https://api.github.com/repos/huggingface/datasets/issues/1848/events | null | https://api.github.com/repos/huggingface/datasets/issues/1848/labels{/name} | 2021-02-08T18:43:51Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1848.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1848",
"merged_at": "2021-02-10T12:29:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1848.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 803,826,506 | [] | https://api.github.com/repos/huggingface/datasets/issues/1848 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | Refactor configuration settings into their own module.
This could be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created. | 2021-02-10T12:29:35Z | https://github.com/huggingface/datasets/pull/1848 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1848/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1847/comments | https://api.github.com/repos/huggingface/datasets/issues/1847/timeline | 2021-02-09T17:53:21Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY5Njg4NDY0 | closed | [] | false | 1,847 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | [Metrics] Add word error metric metric | https://api.github.com/repos/huggingface/datasets/issues/1847/events | null | https://api.github.com/repos/huggingface/datasets/issues/1847/labels{/name} | 2021-02-08T18:41:15Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1847.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1847",
"merged_at": "2021-02-09T17:53:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1847.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 803,824,694 | [] | https://api.github.com/repos/huggingface/datasets/issues/1847 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | This PR adds the word error rate metric to datasets.
WER: https://en.wikipedia.org/wiki/Word_error_rate
for speech recognition. WER is the main metric used in ASR.
`jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939) | 2021-02-09T17:53:21Z | https://github.com/huggingface/datasets/pull/1847 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1847/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1846/comments | https://api.github.com/repos/huggingface/datasets/issues/1846/timeline | 2021-02-25T14:10:18Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY5NjczMzcy | closed | [] | false | 1,846 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | Make DownloadManager downloaded/extracted paths accessible | https://api.github.com/repos/huggingface/datasets/issues/1846/events | null | https://api.github.com/repos/huggingface/datasets/issues/1846/labels{/name} | 2021-02-08T18:14:42Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1846.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1846",
"merged_at": "2021-02-25T14:10:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1846.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 803,806,380 | [] | https://api.github.com/repos/huggingface/datasets/issues/1846 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | Make the file paths downloaded/extracted by DownloadManager accessible.
Close #1831.
The approach:
- I set these paths as DownloadManager attributes: these are DownloadManager's concerns
- To access these from DatasetBuilder, I set the DownloadManager instance as a DatasetBuilder attribute: object composition
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1846/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1845/comments | https://api.github.com/repos/huggingface/datasets/issues/1845/timeline | 2021-02-09T14:22:37Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY5NTk2MTIz | closed | [] | false | 1,845 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Enable logging propagation and remove logging handler | https://api.github.com/repos/huggingface/datasets/issues/1845/events | null | https://api.github.com/repos/huggingface/datasets/issues/1845/labels{/name} | 2021-02-08T16:22:13Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1845.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1845",
"merged_at": "2021-02-09T14:22:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1845.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 803,714,493 | [] | https://api.github.com/repos/huggingface/datasets/issues/1845 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691
But since it's now fixed, we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as requested in #1826
I also re... | 2021-02-09T14:22:38Z | https://github.com/huggingface/datasets/pull/1845 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1845/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1844/comments | https://api.github.com/repos/huggingface/datasets/issues/1844/timeline | 2021-02-12T17:38:58Z | null | completed | MDU6SXNzdWU4MDM1ODgxMjU= | closed | [] | null | 1,844 | {
"avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4",
"events_url": "https://api.github.com/users/Valahaar/events{/privacy}",
"followers_url": "https://api.github.com/users/Valahaar/followers",
"following_url": "https://api.github.com/users/Valahaar/following{/other_user}",
"gists_url": "htt... | Update Open Subtitles corpus with original sentence IDs | https://api.github.com/repos/huggingface/datasets/issues/1844/events | null | https://api.github.com/repos/huggingface/datasets/issues/1844/labels{/name} | 2021-02-08T13:55:13Z | null | false | null | null | 803,588,125 | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | https://api.github.com/repos/huggingface/datasets/issues/1844 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles).
I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat a... | 2021-02-12T17:38:58Z | https://github.com/huggingface/datasets/issues/1844 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1844/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1843/comments | https://api.github.com/repos/huggingface/datasets/issues/1843/timeline | null | null | null | MDU6SXNzdWU4MDM1NjUzOTM= | open | [] | null | 1,843 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | MustC Speech Translation | https://api.github.com/repos/huggingface/datasets/issues/1843/events | null | https://api.github.com/repos/huggingface/datasets/issues/1843/labels{/name} | 2021-02-08T13:27:45Z | null | false | null | null | 803,565,393 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | https://api.github.com/repos/huggingface/datasets/issues/1843 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ## Adding a Dataset
- **Name:** *IWSLT19*
- **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.*
- **Homepage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation*
- **Data:** *https://sites.google.com/view/iwslt-evaluation-2... | 2021-05-14T14:53:34Z | https://github.com/huggingface/datasets/issues/1843 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1843/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1842/comments | https://api.github.com/repos/huggingface/datasets/issues/1842/timeline | 2023-02-28T16:29:22Z | null | completed | MDU6SXNzdWU4MDM1NjMxNDk= | closed | [] | null | 1,842 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | Add AMI Corpus | https://api.github.com/repos/huggingface/datasets/issues/1842/events | null | https://api.github.com/repos/huggingface/datasets/issues/1842/labels{/name} | 2021-02-08T13:25:00Z | null | false | null | null | 803,563,149 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | https://api.github.com/repos/huggingface/datasets/issues/1842 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ## Adding a Dataset
- **Name:** *AMI*
- **Description:** *The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elic... | 2023-02-28T16:29:22Z | https://github.com/huggingface/datasets/issues/1842 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1842/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1841/comments | https://api.github.com/repos/huggingface/datasets/issues/1841/timeline | 2021-03-15T05:59:02Z | null | completed | MDU6SXNzdWU4MDM1NjExMjM= | closed | [] | null | 1,841 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | Add ljspeech | https://api.github.com/repos/huggingface/datasets/issues/1841/events | null | https://api.github.com/repos/huggingface/datasets/issues/1841/labels{/name} | 2021-02-08T13:22:26Z | null | false | null | null | 803,561,123 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | https://api.github.com/repos/huggingface/datasets/issues/1841 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ## Adding a Dataset
- **Name:** *ljspeech*
- **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of ap... | 2021-03-15T05:59:02Z | https://github.com/huggingface/datasets/issues/1841 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1841/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1840/comments | https://api.github.com/repos/huggingface/datasets/issues/1840/timeline | 2021-03-15T05:56:21Z | null | completed | MDU6SXNzdWU4MDM1NjAwMzk= | closed | [] | null | 1,840 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | Add common voice | https://api.github.com/repos/huggingface/datasets/issues/1840/events | null | https://api.github.com/repos/huggingface/datasets/issues/1840/labels{/name} | 2021-02-08T13:21:05Z | null | false | null | null | 803,560,039 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | https://api.github.com/repos/huggingface/datasets/issues/1840 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ## Adding a Dataset
- **Name:** *common voice*
- **Description:** *Mozilla Common Voice Dataset*
- **Paper:** Homepage: https://voice.mozilla.org/en/datasets
- **Data:** https://voice.mozilla.org/en/datasets
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/dat... | 2022-03-20T15:23:40Z | https://github.com/huggingface/datasets/issues/1840 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1840/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1839/comments | https://api.github.com/repos/huggingface/datasets/issues/1839/timeline | null | null | null | MDU6SXNzdWU4MDM1NTkxNjQ= | open | [] | null | 1,839 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | Add Voxforge | https://api.github.com/repos/huggingface/datasets/issues/1839/events | null | https://api.github.com/repos/huggingface/datasets/issues/1839/labels{/name} | 2021-02-08T13:19:56Z | null | false | null | null | 803,559,164 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | https://api.github.com/repos/huggingface/datasets/issues/1839 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ## Adding a Dataset
- **Name:** *voxforge*
- **Description:** *VoxForge is a language classification dataset. It consists of user submitted audio clips submitted to the website. In this release, data from 6 languages is collected - English, Spanish, French, German, Russian, and Italian. Since the website is constant... | 2021-02-08T13:28:31Z | https://github.com/huggingface/datasets/issues/1839 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1839/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1838/comments | https://api.github.com/repos/huggingface/datasets/issues/1838/timeline | 2022-10-04T14:34:12Z | null | completed | MDU6SXNzdWU4MDM1NTc1MjE= | closed | [] | null | 1,838 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | Add tedlium | https://api.github.com/repos/huggingface/datasets/issues/1838/events | null | https://api.github.com/repos/huggingface/datasets/issues/1838/labels{/name} | 2021-02-08T13:17:52Z | null | false | null | null | 803,557,521 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | https://api.github.com/repos/huggingface/datasets/issues/1838 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ## Adding a Dataset
- **Name:** *tedlium*
- **Description:** *The TED-LIUM 1-3 corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.*
- **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ &, https://www.openslr.org/51... | 2022-10-04T14:34:12Z | https://github.com/huggingface/datasets/issues/1838 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1838/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1837/comments | https://api.github.com/repos/huggingface/datasets/issues/1837/timeline | 2021-12-28T15:05:08Z | null | completed | MDU6SXNzdWU4MDM1NTU2NTA= | closed | [] | null | 1,837 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | Add VCTK | https://api.github.com/repos/huggingface/datasets/issues/1837/events | null | https://api.github.com/repos/huggingface/datasets/issues/1837/labels{/name} | 2021-02-08T13:15:28Z | null | false | null | null | 803,555,650 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | https://api.github.com/repos/huggingface/datasets/issues/1837 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ## Adding a Dataset
- **Name:** *VCTK*
- **Description:** *This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent arch... | 2021-12-28T15:05:08Z | https://github.com/huggingface/datasets/issues/1837 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1837/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1836/comments | https://api.github.com/repos/huggingface/datasets/issues/1836/timeline | 2021-02-10T16:14:58Z | null | completed | MDU6SXNzdWU4MDM1MzE4Mzc= | closed | [] | null | 1,836 | {
"avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4",
"events_url": "https://api.github.com/users/Paethon/events{/privacy}",
"followers_url": "https://api.github.com/users/Paethon/followers",
"following_url": "https://api.github.com/users/Paethon/following{/other_user}",
"gists_url": "https://... | test.json has been removed from the limit dataset repo (breaks dataset) | https://api.github.com/repos/huggingface/datasets/issues/1836/events | null | https://api.github.com/repos/huggingface/datasets/issues/1836/labels{/name} | 2021-02-08T12:45:53Z | null | false | null | null | 803,531,837 | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | https://api.github.com/repos/huggingface/datasets/issues/1836 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | https://github.com/huggingface/datasets/blob/16042b233dbff2a7585110134e969204c69322c3/datasets/limit/limit.py#L51
The URL is not valid anymore since test.json has been removed in master for some reason. Directly referencing the last commit works:
`https://raw.githubusercontent.com/ilmgut/limit_dataset/0707d3989cd... | 2021-02-10T16:14:58Z | https://github.com/huggingface/datasets/issues/1836 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1836/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1835/comments | https://api.github.com/repos/huggingface/datasets/issues/1835/timeline | null | null | null | MDU6SXNzdWU4MDM1MjQ3OTA= | open | [] | null | 1,835 | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_use... | Add CHiME4 dataset | https://api.github.com/repos/huggingface/datasets/issues/1835/events | null | https://api.github.com/repos/huggingface/datasets/issues/1835/labels{/name} | 2021-02-08T12:36:38Z | null | false | null | null | 803,524,790 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "d93f0b",... | https://api.github.com/repos/huggingface/datasets/issues/1835 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ## Adding a Dataset
- **Name:** Chime4
- **Description:** Chime4 is a dataset for automatic speech recognition. It is especially useful for evaluating models in a noisy environment and for multi-channel ASR
- **Paper:** Dataset comes from a channel: http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/ . Results pape... | 2024-02-01T10:25:03Z | https://github.com/huggingface/datasets/issues/1835 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1835/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1834/comments | https://api.github.com/repos/huggingface/datasets/issues/1834/timeline | 2021-02-08T12:42:50Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4 | closed | [] | false | 1,834 | {
"avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4",
"events_url": "https://api.github.com/users/Paethon/events{/privacy}",
"followers_url": "https://api.github.com/users/Paethon/followers",
"following_url": "https://api.github.com/users/Paethon/following{/other_user}",
"gists_url": "https://... | Fixes base_url of limit dataset | https://api.github.com/repos/huggingface/datasets/issues/1834/events | null | https://api.github.com/repos/huggingface/datasets/issues/1834/labels{/name} | 2021-02-08T12:26:35Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1834.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1834",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1834.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1834"
} | 803,517,094 | [] | https://api.github.com/repos/huggingface/datasets/issues/1834 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | `test.json` is not available in the master branch of the repository anymore. Linking to a specific commit. | 2021-02-08T12:42:50Z | https://github.com/huggingface/datasets/pull/1834 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1834/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1833/comments | https://api.github.com/repos/huggingface/datasets/issues/1833/timeline | 2021-02-12T14:08:24Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY5MDk5MTUx | closed | [] | false | 1,833 | {
"avatar_url": "https://avatars.githubusercontent.com/u/635220?v=4",
"events_url": "https://api.github.com/users/pjox/events{/privacy}",
"followers_url": "https://api.github.com/users/pjox/followers",
"following_url": "https://api.github.com/users/pjox/following{/other_user}",
"gists_url": "https://api.githu... | Add OSCAR dataset card | https://api.github.com/repos/huggingface/datasets/issues/1833/events | null | https://api.github.com/repos/huggingface/datasets/issues/1833/labels{/name} | 2021-02-08T01:39:49Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1833.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1833",
"merged_at": "2021-02-12T14:08:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1833.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 803,120,978 | [] | https://api.github.com/repos/huggingface/datasets/issues/1833 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | I added more information and completed the dataset card for OSCAR, which was started by @lhoestq in his previous [PR](https://github.com/huggingface/datasets/pull/1824). | 2021-02-12T14:09:25Z | https://github.com/huggingface/datasets/pull/1833 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1833/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1832/comments | https://api.github.com/repos/huggingface/datasets/issues/1832/timeline | 2021-02-08T17:27:29Z | null | completed | MDU6SXNzdWU4MDI4ODA4OTc= | closed | [] | null | 1,832 | {
"avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4",
"events_url": "https://api.github.com/users/JimmyJim1/events{/privacy}",
"followers_url": "https://api.github.com/users/JimmyJim1/followers",
"following_url": "https://api.github.com/users/JimmyJim1/following{/other_user}",
"gists_url": "... | Looks like nokogumbo is up-to-date now, so this is no longer needed. | https://api.github.com/repos/huggingface/datasets/issues/1832/events | null | https://api.github.com/repos/huggingface/datasets/issues/1832/labels{/name} | 2021-02-07T06:52:07Z | null | false | null | null | 802,880,897 | [] | https://api.github.com/repos/huggingface/datasets/issues/1832 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Looks like nokogumbo is up-to-date now, so this is no longer needed.
__Originally posted by @dependabot in https://github.com/discourse/discourse/pull/11373#issuecomment-738993432__ | 2021-02-08T17:27:29Z | https://github.com/huggingface/datasets/issues/1832 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1832/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1831/comments | https://api.github.com/repos/huggingface/datasets/issues/1831/timeline | 2021-02-25T14:10:18Z | null | completed | MDU6SXNzdWU4MDI4Njg4NTQ= | closed | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | 1,831 | {
"avatar_url": "https://avatars.githubusercontent.com/u/27874014?v=4",
"events_url": "https://api.github.com/users/svjack/events{/privacy}",
"followers_url": "https://api.github.com/users/svjack/followers",
"following_url": "https://api.github.com/users/svjack/following{/other_user}",
"gists_url": "https://a... | Some question about raw dataset download info in the project . | https://api.github.com/repos/huggingface/datasets/issues/1831/events | null | https://api.github.com/repos/huggingface/datasets/issues/1831/labels{/name} | 2021-02-07T05:33:36Z | null | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | null | 802,868,854 | [] | https://api.github.com/repos/huggingface/datasets/issues/1831 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hi, I reviewed the code in
https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py
The _split_generators function holds the actual logic for downloading the raw dataset with dl_manager,
and the Conll2003 class is obtained via import_main_class in the load_dataset function.
My question is that, with this logic i... | 2021-02-25T14:10:18Z | https://github.com/huggingface/datasets/issues/1831 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1831/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1830/comments | https://api.github.com/repos/huggingface/datasets/issues/1830/timeline | null | null | null | MDU6SXNzdWU4MDI3OTAwNzU= | open | [] | null | 1,830 | {
"avatar_url": "https://avatars.githubusercontent.com/u/7662740?v=4",
"events_url": "https://api.github.com/users/wumpusman/events{/privacy}",
"followers_url": "https://api.github.com/users/wumpusman/followers",
"following_url": "https://api.github.com/users/wumpusman/following{/other_user}",
"gists_url": "h... | using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? | https://api.github.com/repos/huggingface/datasets/issues/1830/events | null | https://api.github.com/repos/huggingface/datasets/issues/1830/labels{/name} | 2021-02-06T21:00:26Z | null | false | null | null | 802,790,075 | [] | https://api.github.com/repos/huggingface/datasets/issues/1830 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | This could totally be me misunderstanding particular call functions, but I added words to a GPT2Tokenizer and saved it to disk (note: I'm only showing snippets, but I can share more), and the map function ran much slower:
````
def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"):
words_u... | 2021-02-24T21:56:14Z | https://github.com/huggingface/datasets/issues/1830 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1830/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1829/comments | https://api.github.com/repos/huggingface/datasets/issues/1829/timeline | 2021-02-08T13:17:53Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY4NzgzNjA5 | closed | [] | false | 1,829 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Add Tweet Eval Dataset | https://api.github.com/repos/huggingface/datasets/issues/1829/events | null | https://api.github.com/repos/huggingface/datasets/issues/1829/labels{/name} | 2021-02-06T12:36:25Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1829.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1829",
"merged_at": "2021-02-08T13:17:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1829.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 802,693,600 | [] | https://api.github.com/repos/huggingface/datasets/issues/1829 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Closes Draft PR #1407.
Notes:
1. I have excluded `mapping.txt` from the dataset as it only contained the name mappings, which are already present in the ClassLabels.
2. I have also excluded the textual names for the emojis mentioned in the [mapping](https://github.com/cardiffnlp/tweeteval/blob/main/datasets/emoji/... | 2021-02-08T13:17:54Z | https://github.com/huggingface/datasets/pull/1829 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1829/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1828/comments | https://api.github.com/repos/huggingface/datasets/issues/1828/timeline | 2021-02-18T14:17:07Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY4NTkwNDM2 | closed | [] | true | 1,828 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Add CelebA Dataset | https://api.github.com/repos/huggingface/datasets/issues/1828/events | null | https://api.github.com/repos/huggingface/datasets/issues/1828/labels{/name} | 2021-02-05T20:20:55Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1828.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1828",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1828.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1828"
} | 802,449,234 | [] | https://api.github.com/repos/huggingface/datasets/issues/1828 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Trying to add CelebA Dataset.
I need help with testing: loading examples takes a lot of time, so I am unable to generate the `dataset_infos.json` or to test. I also need help with creating `dummy_data.zip`.
Additionally, trying to load a few examples using `load_dataset('./datasets/celeb_a',split='train[10:20]... | 2021-02-18T14:17:07Z | https://github.com/huggingface/datasets/pull/1828 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1828/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1827/comments | https://api.github.com/repos/huggingface/datasets/issues/1827/timeline | 2021-02-18T13:55:16Z | null | completed | MDU6SXNzdWU4MDIzNTM5NzQ= | closed | [] | null | 1,827 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Regarding On-the-fly Data Loading | https://api.github.com/repos/huggingface/datasets/issues/1827/events | null | https://api.github.com/repos/huggingface/datasets/issues/1827/labels{/name} | 2021-02-05T17:43:48Z | null | false | null | null | 802,353,974 | [] | https://api.github.com/repos/huggingface/datasets/issues/1827 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Hi,
I was wondering if it is possible to load images/texts as a batch during the training process, without loading the entire dataset into RAM at any given point.
Thanks,
Gunjan | 2021-02-18T13:55:16Z | https://github.com/huggingface/datasets/issues/1827 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1827/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1826/comments | https://api.github.com/repos/huggingface/datasets/issues/1826/timeline | 2021-02-09T17:39:27Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY4Mjc4OTI2 | closed | [] | false | 1,826 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | Print error message with filename when malformed CSV | https://api.github.com/repos/huggingface/datasets/issues/1826/events | null | https://api.github.com/repos/huggingface/datasets/issues/1826/labels{/name} | 2021-02-05T11:07:59Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1826.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1826",
"merged_at": "2021-02-09T17:39:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1826.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 802,074,744 | [] | https://api.github.com/repos/huggingface/datasets/issues/1826 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | Print an error message specifying the filename when a CSV file is malformed.
Close #1821 | 2021-02-09T17:39:27Z | https://github.com/huggingface/datasets/pull/1826 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1826/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1825/comments | https://api.github.com/repos/huggingface/datasets/issues/1825/timeline | 2021-03-16T09:44:00Z | null | completed | MDU6SXNzdWU4MDIwNzM5MjU= | closed | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | 1,825 | {
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_u... | Datasets library not suitable for huge text datasets. | https://api.github.com/repos/huggingface/datasets/issues/1825/events | null | https://api.github.com/repos/huggingface/datasets/issues/1825/labels{/name} | 2021-02-05T11:06:50Z | null | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",... | null | 802,073,925 | [] | https://api.github.com/repos/huggingface/datasets/issues/1825 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Hi,
I'm trying to use datasets library to load a 187GB dataset of pure text, with the intention of building a Language Model. The problem is that from the 187GB it goes to some TB when processed by Datasets. First of all, I think the pre-tokenizing step (with tokenizer.map()) is not really thought for datasets this ... | 2021-03-30T14:04:01Z | https://github.com/huggingface/datasets/issues/1825 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1825/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1824/comments | https://api.github.com/repos/huggingface/datasets/issues/1824/timeline | 2021-02-08T11:30:33Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY4MjU3MTU3 | closed | [] | false | 1,824 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Add OSCAR dataset card | https://api.github.com/repos/huggingface/datasets/issues/1824/events | null | https://api.github.com/repos/huggingface/datasets/issues/1824/labels{/name} | 2021-02-05T10:30:26Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1824.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1824",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1824.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1824"
} | 802,048,281 | [] | https://api.github.com/repos/huggingface/datasets/issues/1824 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | I started adding the dataset card for OSCAR !
For now it's just basic info for all the different configurations in `Dataset Structure`.
In particular the Data Splits section tells how may samples there are for each config. The Data Instances section show an example for each config, and it also shows the size in MB.... | 2021-05-05T18:24:14Z | https://github.com/huggingface/datasets/pull/1824 | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1824/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1823/comments | https://api.github.com/repos/huggingface/datasets/issues/1823/timeline | 2021-03-01T10:21:39Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY4MjUyMjIx | closed | [] | false | 1,823 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Add FewRel Dataset | https://api.github.com/repos/huggingface/datasets/issues/1823/events | null | https://api.github.com/repos/huggingface/datasets/issues/1823/labels{/name} | 2021-02-05T10:22:03Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1823.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1823",
"merged_at": "2021-03-01T10:21:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1823.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 802,042,181 | [] | https://api.github.com/repos/huggingface/datasets/issues/1823 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Hi,
This PR closes this [Card](https://github.com/huggingface/datasets/projects/1#card-53285184) and Issue #1757.
I wasn't sure how to add `pid2name` along with the dataset so I added it as a separate configuration. For each (head, tail, tokens) triplet, I have created one example. I have added the dictionary key... | 2021-03-01T11:56:20Z | https://github.com/huggingface/datasets/pull/1823 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1823/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1822/comments | https://api.github.com/repos/huggingface/datasets/issues/1822/timeline | 2021-02-15T09:57:39Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY4MjIxMzIz | closed | [] | false | 1,822 | {
"avatar_url": "https://avatars.githubusercontent.com/u/33565881?v=4",
"events_url": "https://api.github.com/users/avinsit123/events{/privacy}",
"followers_url": "https://api.github.com/users/avinsit123/followers",
"following_url": "https://api.github.com/users/avinsit123/following{/other_user}",
"gists_url"... | Add Hindi Discourse Analysis Natural Language Inference Dataset | https://api.github.com/repos/huggingface/datasets/issues/1822/events | null | https://api.github.com/repos/huggingface/datasets/issues/1822/labels{/name} | 2021-02-05T09:30:54Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1822.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1822",
"merged_at": "2021-02-15T09:57:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1822.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 802,003,835 | [] | https://api.github.com/repos/huggingface/datasets/issues/1822 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | # Dataset Card for Hindi Discourse Analysis Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#dat... | 2021-02-15T09:57:39Z | https://github.com/huggingface/datasets/pull/1822 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1822/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1821/comments | https://api.github.com/repos/huggingface/datasets/issues/1821/timeline | 2021-02-09T17:39:27Z | null | completed | MDU6SXNzdWU4MDE3NDc2NDc= | closed | [] | null | 1,821 | {
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user... | Provide better exception message when one of many files results in an exception | https://api.github.com/repos/huggingface/datasets/issues/1821/events | null | https://api.github.com/repos/huggingface/datasets/issues/1821/labels{/name} | 2021-02-05T00:49:03Z | null | false | null | null | 801,747,647 | [] | https://api.github.com/repos/huggingface/datasets/issues/1821 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | I find when I process many files, i.e.
```
train_files = glob.glob('train*.csv')
validation_files = glob.glob('validation*.csv')
datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files))
```
I sometimes encounter an error due to one of the files being misformed (i.e. no dat... | 2021-02-09T17:39:27Z | https://github.com/huggingface/datasets/issues/1821 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1821/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1820/comments | https://api.github.com/repos/huggingface/datasets/issues/1820/timeline | 2021-02-05T14:00:00Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY3ODI4OTg1 | closed | [] | false | 1,820 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Add metrics usage examples and tests | https://api.github.com/repos/huggingface/datasets/issues/1820/events | null | https://api.github.com/repos/huggingface/datasets/issues/1820/labels{/name} | 2021-02-04T18:23:50Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1820.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1820",
"merged_at": "2021-02-05T14:00:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1820.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 801,529,936 | [] | https://api.github.com/repos/huggingface/datasets/issues/1820 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | All metrics finally have usage examples and proper fast + slow tests :)
I added examples of usage for every metric, and I use doctest to make sure they all work as expected.
For "slow" metrics such as bert_score or bleurt which require to download + run a transformer model, the download + forward pass are only do... | 2021-02-05T14:00:01Z | https://github.com/huggingface/datasets/pull/1820 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1820/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1819/comments | https://api.github.com/repos/huggingface/datasets/issues/1819/timeline | 2021-02-04T16:52:26Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY3NzYyMzI2 | closed | [] | false | 1,819 | {
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url"... | Fixed spelling `S3Fileystem` to `S3FileSystem` | https://api.github.com/repos/huggingface/datasets/issues/1819/events | null | https://api.github.com/repos/huggingface/datasets/issues/1819/labels{/name} | 2021-02-04T16:36:46Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1819.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1819",
"merged_at": "2021-02-04T16:52:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1819.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 801,448,670 | [] | https://api.github.com/repos/huggingface/datasets/issues/1819 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | Fixed documentation spelling errors.
Wrong `S3Fileystem`
Right `S3FileSystem` | 2021-02-04T16:52:27Z | https://github.com/huggingface/datasets/pull/1819 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1819/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1818/comments | https://api.github.com/repos/huggingface/datasets/issues/1818/timeline | 2022-06-01T15:38:42Z | null | completed | MDU6SXNzdWU4MDA5NTg3NzY= | closed | [] | null | 1,818 | {
"avatar_url": "https://avatars.githubusercontent.com/u/15032072?v=4",
"events_url": "https://api.github.com/users/Alxe1/events{/privacy}",
"followers_url": "https://api.github.com/users/Alxe1/followers",
"following_url": "https://api.github.com/users/Alxe1/following{/other_user}",
"gists_url": "https://api.... | Loading local dataset raise requests.exceptions.ConnectTimeout | https://api.github.com/repos/huggingface/datasets/issues/1818/events | null | https://api.github.com/repos/huggingface/datasets/issues/1818/labels{/name} | 2021-02-04T05:55:23Z | null | false | null | null | 800,958,776 | [] | https://api.github.com/repos/huggingface/datasets/issues/1818 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | Load local dataset:
```
dataset = load_dataset('json', data_files=["../../data/json.json"])
train = dataset["train"]
print(train.features)
train1 = train.map(lambda x: {"labels": 1})
print(train1[:2])
```
but it raised requests.exceptions.ConnectTimeout:
```
/Users/littlely/myvirtual/tf2/bin/python3.7 /Us... | 2022-06-01T15:38:42Z | https://github.com/huggingface/datasets/issues/1818 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1818/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1817/comments | https://api.github.com/repos/huggingface/datasets/issues/1817/timeline | 2022-10-05T12:42:57Z | null | completed | MDU6SXNzdWU4MDA4NzA2NTI= | closed | [] | null | 1,817 | {
"avatar_url": "https://avatars.githubusercontent.com/u/9610770?v=4",
"events_url": "https://api.github.com/users/LuCeHe/events{/privacy}",
"followers_url": "https://api.github.com/users/LuCeHe/followers",
"following_url": "https://api.github.com/users/LuCeHe/following{/other_user}",
"gists_url": "https://ap... | pyarrow.lib.ArrowInvalid: Column 1 named input_ids expected length 599 but got length 1500 | https://api.github.com/repos/huggingface/datasets/issues/1817/events | null | https://api.github.com/repos/huggingface/datasets/issues/1817/labels{/name} | 2021-02-04T02:30:23Z | null | false | null | null | 800,870,652 | [] | https://api.github.com/repos/huggingface/datasets/issues/1817 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | NONE | I am trying to preprocess any dataset in this package with GPT-2 tokenizer, so I need to structure the datasets as long sequences of text without padding. I've been following a couple of your tutorials and here you can find the script that is failing right at the end
https://github.com/LuCeHe/GenericTools/blob/maste... | 2022-10-05T12:42:57Z | https://github.com/huggingface/datasets/issues/1817 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1817/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1816/comments | https://api.github.com/repos/huggingface/datasets/issues/1816/timeline | 2021-02-15T15:04:33Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY3MTExMjEx | closed | [] | false | 1,816 | {
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "http... | Doc2dial rc update to latest version | https://api.github.com/repos/huggingface/datasets/issues/1816/events | null | https://api.github.com/repos/huggingface/datasets/issues/1816/labels{/name} | 2021-02-03T20:08:54Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1816.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1816",
"merged_at": "2021-02-15T15:04:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1816.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 800,660,995 | [] | https://api.github.com/repos/huggingface/datasets/issues/1816 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | 2021-02-15T15:15:24Z | https://github.com/huggingface/datasets/pull/1816 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1816/reactions"
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/1815/comments | https://api.github.com/repos/huggingface/datasets/issues/1815/timeline | 2021-03-01T10:36:21Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY3MDY3NjU1 | closed | [] | false | 1,815 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Add CCAligned Multilingual Dataset | https://api.github.com/repos/huggingface/datasets/issues/1815/events | null | https://api.github.com/repos/huggingface/datasets/issues/1815/labels{/name} | 2021-02-03T18:59:52Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1815.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1815",
"merged_at": "2021-03-01T10:36:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1815.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 800,610,017 | [] | https://api.github.com/repos/huggingface/datasets/issues/1815 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Hello,
I'm trying to add [CCAligned Multilingual Dataset](http://www.statmt.org/cc-aligned/). This has the potential to close #1756.
This dataset has two types - Document-Pairs, and Sentence-Pairs.
The datasets are huge, so I won't be able to test all of them. At the same time, a user might only want to downlo... | 2021-03-01T12:33:03Z | https://github.com/huggingface/datasets/pull/1815 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1815/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1814/comments | https://api.github.com/repos/huggingface/datasets/issues/1814/timeline | 2021-02-04T16:21:48Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY2OTg4NTI1 | closed | [] | false | 1,814 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Add Freebase QA Dataset | https://api.github.com/repos/huggingface/datasets/issues/1814/events | null | https://api.github.com/repos/huggingface/datasets/issues/1814/labels{/name} | 2021-02-03T16:57:49Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1814.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1814",
"merged_at": "2021-02-04T16:21:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1814.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 800,516,236 | [] | https://api.github.com/repos/huggingface/datasets/issues/1814 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Closes PR #1435. Fixed issues with PR #1809.
Requesting @lhoestq to review. | 2021-02-04T19:47:51Z | https://github.com/huggingface/datasets/pull/1814 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1814/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1813/comments | https://api.github.com/repos/huggingface/datasets/issues/1813/timeline | 2021-02-05T10:33:47Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY2OTIxNDcz | closed | [] | false | 1,813 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | Support future datasets | https://api.github.com/repos/huggingface/datasets/issues/1813/events | null | https://api.github.com/repos/huggingface/datasets/issues/1813/labels{/name} | 2021-02-03T15:26:49Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1813.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1813",
"merged_at": "2021-02-05T10:33:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1813.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 800,435,973 | [] | https://api.github.com/repos/huggingface/datasets/issues/1813 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | MEMBER | If a dataset is available at the version of the local installation of `datasets` (e.g. 1.2.0), then loading this dataset means loading the script at this version.
However when trying to load a dataset that is only available on master, currently users have to specify `script_version="master"` in `load_dataset` to mak... | 2021-02-05T10:33:48Z | https://github.com/huggingface/datasets/pull/1813 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1813/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1812/comments | https://api.github.com/repos/huggingface/datasets/issues/1812/timeline | 2021-02-08T10:39:06Z | null | null | MDExOlB1bGxSZXF1ZXN0NTY2MDMxODIy | closed | [] | false | 1,812 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Add CIFAR-100 Dataset | https://api.github.com/repos/huggingface/datasets/issues/1812/events | null | https://api.github.com/repos/huggingface/datasets/issues/1812/labels{/name} | 2021-02-02T15:22:59Z | null | false | null | {
"diff_url": "https://github.com/huggingface/datasets/pull/1812.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1812",
"merged_at": "2021-02-08T10:39:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1812.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | 799,379,178 | [] | https://api.github.com/repos/huggingface/datasets/issues/1812 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | Adding CIFAR-100 Dataset. | 2021-02-08T11:10:18Z | https://github.com/huggingface/datasets/pull/1812 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1812/reactions"
} | true |
https://api.github.com/repos/huggingface/datasets/issues/1811/comments | https://api.github.com/repos/huggingface/datasets/issues/1811/timeline | 2021-02-18T14:16:31Z | null | completed | MDU6SXNzdWU3OTkyMTEwNjA= | closed | [] | null | 1,811 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Unable to add Multi-label Datasets | https://api.github.com/repos/huggingface/datasets/issues/1811/events | null | https://api.github.com/repos/huggingface/datasets/issues/1811/labels{/name} | 2021-02-02T11:50:56Z | null | false | null | null | 799,211,060 | [] | https://api.github.com/repos/huggingface/datasets/issues/1811 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | I am trying to add [CIFAR-100](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. The dataset contains two labels per image - `fine label` and `coarse label`. Using just one label in supervised keys as
`supervised_keys=("img", "fine_label")` raises no issue. But trying `supervised_keys=("img", "fine_label","coarse... | 2021-02-18T14:16:31Z | https://github.com/huggingface/datasets/issues/1811 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1811/reactions"
} | false |
https://api.github.com/repos/huggingface/datasets/issues/1810/comments | https://api.github.com/repos/huggingface/datasets/issues/1810/timeline | null | null | null | MDU6SXNzdWU3OTkxNjg2NTA= | open | [] | null | 1,810 | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url"... | Add Hateful Memes Dataset | https://api.github.com/repos/huggingface/datasets/issues/1810/events | null | https://api.github.com/repos/huggingface/datasets/issues/1810/labels{/name} | 2021-02-02T10:53:59Z | null | false | null | null | 799,168,650 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | https://api.github.com/repos/huggingface/datasets/issues/1810 | [
"",
""
] | https://api.github.com/repos/huggingface/datasets | CONTRIBUTOR | ## Add Hateful Memes Dataset
- **Name:** Hateful Memes
- **Description:** [https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set]( https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set)
- **Paper:** [https://arxiv.org/pdf/2005.04790.pdf](https://arxiv.org/pdf/2005.04790.pdf)
- **Data:** [Thi... | 2021-12-08T12:03:59Z | https://github.com/huggingface/datasets/issues/1810 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1810/reactions"
} | false |