url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | draft | pull_request | body | reactions | timeline_url | performed_via_github_app | state_reason | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5076/comments | https://api.github.com/repos/huggingface/datasets/issues/5076/events | https://github.com/huggingface/datasets/pull/5076 | 1,397,918,092 | PR_kwDODunzps5AOJp7 | 5,076 | fix: update exception throw from OSError to EnvironmentError in `push… | {
"avatar_url": "https://avatars.githubusercontent.com/u/29496999?v=4",
"events_url": "https://api.github.com/users/rahulXs/events{/privacy}",
"followers_url": "https://api.github.com/users/rahulXs/followers",
"following_url": "https://api.github.com/users/rahulXs/following{/other_user}",
"gists_url": "https://api.github.com/users/rahulXs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rahulXs",
"id": 29496999,
"login": "rahulXs",
"node_id": "MDQ6VXNlcjI5NDk2OTk5",
"organizations_url": "https://api.github.com/users/rahulXs/orgs",
"received_events_url": "https://api.github.com/users/rahulXs/received_events",
"repos_url": "https://api.github.com/users/rahulXs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rahulXs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahulXs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rahulXs"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-05T14:46:29Z | 2022-10-07T14:35:57Z | 2022-10-07T14:33:27Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5076.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5076",
"merged_at": "2022-10-07T14:33:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5076.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5076"
} | Status:
Ready for review
Description of Changes:
Fixes #5075
Changes proposed in this pull request:
- Throw EnvironmentError instead of OSError in `push_to_hub` when the Hub token is not present. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5076/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5076/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3185/comments | https://api.github.com/repos/huggingface/datasets/issues/3185/events | https://github.com/huggingface/datasets/issues/3185 | 1,040,291,961 | I_kwDODunzps4-AZh5 | 3,185 | 7z dataset preview not implemented? | {
"avatar_url": "https://avatars.githubusercontent.com/u/30757466?v=4",
"events_url": "https://api.github.com/users/Kirili4ik/events{/privacy}",
"followers_url": "https://api.github.com/users/Kirili4ik/followers",
"following_url": "https://api.github.com/users/Kirili4ik/following{/other_user}",
"gists_url": "https://api.github.com/users/Kirili4ik/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Kirili4ik",
"id": 30757466,
"login": "Kirili4ik",
"node_id": "MDQ6VXNlcjMwNzU3NDY2",
"organizations_url": "https://api.github.com/users/Kirili4ik/orgs",
"received_events_url": "https://api.github.com/users/Kirili4ik/received_events",
"repos_url": "https://api.github.com/users/Kirili4ik/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Kirili4ik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Kirili4ik/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Kirili4ik"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | [] | null | [
"It's a bug in the dataset viewer: the dataset cannot be downloaded in streaming mode, but since the dataset is relatively small, the dataset viewer should have fallback to normal mode. Working on a fix.",
"Fixed. https://huggingface.co/datasets/samsum/viewer/samsum/train\r\n\r\n<img width=\"1563\" alt=\"Capture ... | 2021-10-30T20:18:27Z | 2022-04-12T11:48:16Z | 2022-04-12T11:48:07Z | NONE | null | null | null | ## Dataset viewer issue for dataset 'samsum'
**Link:** https://huggingface.co/datasets/samsum
Server Error
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol '7z' for file at 'https://arxiv.org/src/1911.12237v2/anc/corpus.7z' is not implemented yet
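As noted in the comments, the archive cannot be streamed but regular (non-streaming) loading works; a minimal sketch, assuming `py7zr` is installed since the loading script needs it to extract the archive:
```python
from datasets import load_dataset

# Downloads and extracts the .7z archive locally instead of streaming it
ds = load_dataset("samsum", split="train")
print(ds[0])
```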
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3185/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3185/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1586/comments | https://api.github.com/repos/huggingface/datasets/issues/1586/events | https://github.com/huggingface/datasets/pull/1586 | 768,864,502 | MDExOlB1bGxSZXF1ZXN0NTQxMTY0MDc2 | 1,586 | added irc disentangle dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/32560035?v=4",
"events_url": "https://api.github.com/users/dhruvjoshi1998/events{/privacy}",
"followers_url": "https://api.github.com/users/dhruvjoshi1998/followers",
"following_url": "https://api.github.com/users/dhruvjoshi1998/following{/other_user}",
"gists_url": "https://api.github.com/users/dhruvjoshi1998/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dhruvjoshi1998",
"id": 32560035,
"login": "dhruvjoshi1998",
"node_id": "MDQ6VXNlcjMyNTYwMDM1",
"organizations_url": "https://api.github.com/users/dhruvjoshi1998/orgs",
"received_events_url": "https://api.github.com/users/dhruvjoshi1998/received_events",
"repos_url": "https://api.github.com/users/dhruvjoshi1998/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dhruvjoshi1998/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dhruvjoshi1998/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dhruvjoshi1998"
} | [] | closed | false | null | [] | null | [
"@lhoestq sorry, this was the only way I was able to fix the pull request ",
"@lhoestq Thank you for the feedback. I wondering whether I should be passing an 'id' field in the dictionary since the 'connections' reference the 'id' of the linked messages. This 'id' would just be the same as the id_ that is in the y... | 2020-12-16T13:25:58Z | 2021-01-29T10:28:53Z | 2021-01-29T10:28:53Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1586.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1586",
"merged_at": "2021-01-29T10:28:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1586.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1586"
} | added irc disentanglement dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1586/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1586/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5557 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5557/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5557/comments | https://api.github.com/repos/huggingface/datasets/issues/5557/events | https://github.com/huggingface/datasets/pull/5557 | 1,593,545,324 | PR_kwDODunzps5Kbube | 5,557 | Add filter desc | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-21T14:04:42Z | 2023-02-21T14:19:54Z | 2023-02-21T14:12:39Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5557.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5557",
"merged_at": "2023-02-21T14:12:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5557.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5557"
} | Otherwise it would show a `Map` progress bar, since it uses `map` under the hood | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5557/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5557/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2503/comments | https://api.github.com/repos/huggingface/datasets/issues/2503/events | https://github.com/huggingface/datasets/issues/2503 | 920,636,186 | MDU6SXNzdWU5MjA2MzYxODY= | 2,503 | SubjQA wrong boolean values in entries | {
"avatar_url": "https://avatars.githubusercontent.com/u/26485052?v=4",
"events_url": "https://api.github.com/users/arnaudstiegler/events{/privacy}",
"followers_url": "https://api.github.com/users/arnaudstiegler/followers",
"following_url": "https://api.github.com/users/arnaudstiegler/following{/other_user}",
"gists_url": "https://api.github.com/users/arnaudstiegler/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arnaudstiegler",
"id": 26485052,
"login": "arnaudstiegler",
"node_id": "MDQ6VXNlcjI2NDg1MDUy",
"organizations_url": "https://api.github.com/users/arnaudstiegler/orgs",
"received_events_url": "https://api.github.com/users/arnaudstiegler/received_events",
"repos_url": "https://api.github.com/users/arnaudstiegler/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arnaudstiegler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnaudstiegler/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arnaudstiegler"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi @arnaudstiegler, thanks for reporting. I'm investigating it.",
"@arnaudstiegler I have just checked that these mismatches are already present in the original dataset: https://github.com/megagonlabs/SubjQA\r\n\r\nWe are going to contact the dataset owners to report this.",
"I have:\r\n- opened an issue in th... | 2021-06-14T17:42:46Z | 2021-08-25T03:52:06Z | null | NONE | null | null | null | ## Describe the bug
SubjQA seems to have a boolean that's consistently wrong.
It defines:
- question_subj_level: The subjectivity level of the question (on a 1 to 5 scale with 1 being the most subjective).
- is_ques_subjective: A boolean subjectivity label derived from question_subj_level (i.e., scores below 4 are considered as subjective)
However, `is_ques_subjective` seems to have wrong values in the entire dataset.
For instance, in the example in the dataset card, we have:
- "question_subj_level": 2
- "is_ques_subjective": false
However, according to the description, the question should be subjective since the `question_subj_level` is below 4
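For illustration, a minimal sketch that counts the mismatches described above; the config name is only an example, and the field types are assumed to be `int`/`bool`:
```python
from datasets import load_dataset

# "electronics" is one example config; SubjQA is organized by domain
ds = load_dataset("subjqa", "electronics", split="train")

# Per the card, question_subj_level < 4 should imply is_ques_subjective == True
bad = ds.filter(lambda x: (x["question_subj_level"] < 4) != x["is_ques_subjective"])
print(f"{len(bad)} of {len(ds)} rows have an inconsistent is_ques_subjective flag")
```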
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2503/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2503/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3219/comments | https://api.github.com/repos/huggingface/datasets/issues/3219/events | https://github.com/huggingface/datasets/issues/3219 | 1,045,095,000 | I_kwDODunzps4-SuJY | 3,219 | Eventual Invalid Token Error at setup of private datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2021-11-04T18:50:45Z | 2021-11-08T13:23:06Z | 2021-11-08T08:59:43Z | MEMBER | null | null | null | ## Describe the bug
From time to time, there appear Invalid Token errors with private datasets:
- https://app.circleci.com/pipelines/github/huggingface/datasets/8520/workflows/d44629f2-4749-40f8-a657-50931d0b3434/jobs/52534
```
____________ ERROR at setup of test_load_streaming_private_dataset _____________
ValueError: Invalid token passed!
____ ERROR at setup of test_load_streaming_private_dataset_with_zipped_data ____
ValueError: Invalid token passed!
=========================== short test summary info ============================
ERROR tests/test_load.py::test_load_streaming_private_dataset - ValueError: I...
ERROR tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data
```
- https://app.circleci.com/pipelines/github/huggingface/datasets/8557/workflows/a8383181-ba6d-4487-9d0a-f750b6dcb936/jobs/52763
```
____ ERROR at setup of test_load_streaming_private_dataset_with_zipped_data ____
[gw1] linux -- Python 3.6.15 /home/circleci/.pyenv/versions/3.6.15/bin/python3.6
hf_api = <huggingface_hub.hf_api.HfApi object at 0x7f4899bab908>
hf_token = 'vgNbyuaLNEBuGbgCEtSBCOcPjZnngJufHkTaZvHwkXKGkHpjBPwmLQuJVXRxBuaRzNlGjlMpYRPbthfHPFWXaaEDTLiqTTecYENxukRYVAAdpeApIUPxcgsowadkTkPj'
zip_csv_path = PosixPath('/tmp/pytest-of-circleci/pytest-0/popen-gw1/data16/dataset.csv.zip')
@pytest.fixture(scope="session")
def hf_private_dataset_repo_zipped_txt_data_(hf_api: HfApi, hf_token, zip_csv_path):
repo_name = "repo_zipped_txt_data-{}".format(int(time.time() * 10e3))
hf_api.create_repo(token=hf_token, name=repo_name, repo_type="dataset", private=True)
repo_id = f"{USER}/{repo_name}"
hf_api.upload_file(
token=hf_token,
path_or_fileobj=str(zip_csv_path),
path_in_repo="data.zip",
repo_id=repo_id,
> repo_type="dataset",
)
tests/hub_fixtures.py:68:
...
ValueError: Invalid token passed!
=========================== short test summary info ============================
ERROR tests/test_load.py::test_load_streaming_private_dataset_with_zipped_data
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3219/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4651/comments | https://api.github.com/repos/huggingface/datasets/issues/4651/events | https://github.com/huggingface/datasets/issues/4651 | 1,296,689,414 | I_kwDODunzps5NSekG | 4,651 | Add Flickr 30k Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/4755430?v=4",
"events_url": "https://api.github.com/users/omarespejel/events{/privacy}",
"followers_url": "https://api.github.com/users/omarespejel/followers",
"following_url": "https://api.github.com/users/omarespejel/following{/other_user}",
"gists_url": "https://api.github.com/users/omarespejel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/omarespejel",
"id": 4755430,
"login": "omarespejel",
"node_id": "MDQ6VXNlcjQ3NTU0MzA=",
"organizations_url": "https://api.github.com/users/omarespejel/orgs",
"received_events_url": "https://api.github.com/users/omarespejel/received_events",
"repos_url": "https://api.github.com/users/omarespejel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/omarespejel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/omarespejel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/omarespejel"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"uploaded dataset [here](https://huggingface.co/datasets/embedding-data/flickr30k-captions)."
] | 2022-07-07T01:59:08Z | 2022-07-14T02:09:45Z | 2022-07-14T02:09:45Z | NONE | null | null | null | ## Adding a Dataset
- **Name:** *Flickr 30k*
- **Description:** *To produce the denotation graph, we have created an image caption corpus consisting of 158,915 crowd-sourced captions describing 31,783 images. This is an extension of our previous Flickr 8k Dataset. The new images and captions focus on people involved in everyday activities and events.*
- **Paper:** *https://transacl.org/ojs/index.php/tacl/article/view/229/33*
- **Data:** *https://huggingface.co/datasets/sentence-transformers/embedding-training-data/resolve/main/flickr30k_captions.jsonl.gz*
- **Motivation:** *Dataset for training and evaluating models of conversational response*
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4651/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4651/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5571/comments | https://api.github.com/repos/huggingface/datasets/issues/5571/events | https://github.com/huggingface/datasets/issues/5571 | 1,597,198,953 | I_kwDODunzps5fM1Jp | 5,571 | load_dataset fails for JSON in windows | {
"avatar_url": "https://avatars.githubusercontent.com/u/11876897?v=4",
"events_url": "https://api.github.com/users/abinashsahu/events{/privacy}",
"followers_url": "https://api.github.com/users/abinashsahu/followers",
"following_url": "https://api.github.com/users/abinashsahu/following{/other_user}",
"gists_url": "https://api.github.com/users/abinashsahu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abinashsahu",
"id": 11876897,
"login": "abinashsahu",
"node_id": "MDQ6VXNlcjExODc2ODk3",
"organizations_url": "https://api.github.com/users/abinashsahu/orgs",
"received_events_url": "https://api.github.com/users/abinashsahu/received_events",
"repos_url": "https://api.github.com/users/abinashsahu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abinashsahu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abinashsahu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abinashsahu"
} | [] | closed | false | null | [] | null | [
"Hi! \r\n\r\nYou need to pass an input json file explicitly as `data_files` to `load_dataset` to avoid this error:\r\n```python\r\n ds = load_dataset(\"json\", data_files=args.input_json)\r\n```\r\n\r\n",
"Thanks it worked!"
] | 2023-02-23T16:50:11Z | 2023-02-24T13:21:47Z | 2023-02-24T13:21:47Z | NONE | null | null | null | ### Describe the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of the Python file is different from the location of the JSON.
4. When I read it using load_dataset("json",args.input_json), it throws an error from builder.py.
raise InvalidConfigName(
f"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. "
f"They could create issues when creating a directory for this config on Windows filesystem."
5. When I bring the data to the current directory, it works fine.
### Steps to reproduce the bug
Steps:
1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method.
2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json"
3. I am reading the file in my local PyCharm - the location of the Python file is different from the location of the JSON.
4. When I read it using load_dataset("json",args.input_json), it throws an error from builder.py.
raise InvalidConfigName(
f"Bad characters from black list '{invalid_windows_characters}' found in '{self.name}'. "
f"They could create issues when creating a directory for this config on Windows filesystem."
5. When I bring the data to the current directory, it works fine.
### Expected behavior
Should be able to read from a path different from the current directory on a Windows machine.
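A minimal sketch of the workaround pointed out in the comments, passing the file explicitly via `data_files` (the path below is illustrative):
```python
from datasets import load_dataset

# Illustrative Windows path; passing it through data_files avoids the
# InvalidConfigName check that is triggered when the path is taken as a config name
input_json = r"C:\Users\name\file.json"
ds = load_dataset("json", data_files=input_json)
```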
### Environment info
datasets version: 2.3.1
python version: 3.8
Windows OS | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5571/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5571/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5046 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5046/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5046/comments | https://api.github.com/repos/huggingface/datasets/issues/5046/events | https://github.com/huggingface/datasets/issues/5046 | 1,391,372,519 | I_kwDODunzps5S7qjn | 5,046 | Audiofolder creates empty Dataset if files same level as metadata | {
"avatar_url": "https://avatars.githubusercontent.com/u/577139?v=4",
"events_url": "https://api.github.com/users/msis/events{/privacy}",
"followers_url": "https://api.github.com/users/msis/followers",
"following_url": "https://api.github.com/users/msis/following{/other_user}",
"gists_url": "https://api.github.com/users/msis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/msis",
"id": 577139,
"login": "msis",
"node_id": "MDQ6VXNlcjU3NzEzOQ==",
"organizations_url": "https://api.github.com/users/msis/orgs",
"received_events_url": "https://api.github.com/users/msis/received_events",
"repos_url": "https://api.github.com/users/msis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/msis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/msis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/msis"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"descript... | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_user}",
"gists_url": "https://api.github.com/users/riccardobucco/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/riccardobucco",
"id": 9295277,
"login": "riccardobucco",
"node_id": "MDQ6VXNlcjkyOTUyNzc=",
"organizations_url": "https://api.github.com/users/riccardobucco/orgs",
"received_events_url": "https://api.github.com/users/riccardobucco/received_events",
"repos_url": "https://api.github.com/users/riccardobucco/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/riccardobucco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/riccardobucco/subscriptions",
"type": "User",
"url": "https://api.github.com/users/riccardobucco"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/9295277?v=4",
"events_url": "https://api.github.com/users/riccardobucco/events{/privacy}",
"followers_url": "https://api.github.com/users/riccardobucco/followers",
"following_url": "https://api.github.com/users/riccardobucco/following{/other_u... | null | [
"Hi! Unfortunately, I can't reproduce this behavior. Instead, I get `ValueError: audio at 2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav doesn't have metadata in /audio-data/metadata.csv`, which can be fixed by removing the `./` from the file name.\r\n\r\n(Link to a Colab that tries to reproduce this behavior: htt... | 2022-09-29T19:17:23Z | 2022-10-28T13:05:07Z | 2022-10-28T13:05:07Z | NONE | null | null | null | ## Describe the bug
When audio files are at the same level as the metadata (`metadata.csv` or `metadata.jsonl`), `load_dataset` returns a `DatasetDict` with no rows but the correct columns.
https://github.com/huggingface/datasets/blob/1ea4d091b7a4b83a85b2eeb8df65115d39af3766/docs/source/audio_dataset.mdx?plain=1#L88
## Steps to reproduce the bug
`metadata.csv`:
```csv
file_name,duration,transcription
./2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav,10.768,hello
```
```python
>>> audio_dataset = load_dataset("audiofolder", data_dir="/audio-data/")
>>> audio_dataset
DatasetDict({
train: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
validation: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
})
```
I've tried, with no success:
- setting `split` to something else so I don't get a `DatasetDict`,
- removing the `./`,
- using `.jsonl`.
## Expected results
```
Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 1
})
```
## Actual results
```
DatasetDict({
train: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
validation: Dataset({
features: ['audio', 'duration', 'transcription'],
num_rows: 0
})
})
```
## Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.13.0-1025-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 9.0.0
- Pandas version: 1.5.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5046/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5046/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2963/comments | https://api.github.com/repos/huggingface/datasets/issues/2963/events | https://github.com/huggingface/datasets/issues/2963 | 1,006,588,605 | I_kwDODunzps47_1K9 | 2,963 | raise TypeError( TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects. | {
"avatar_url": "https://avatars.githubusercontent.com/u/40454218?v=4",
"events_url": "https://api.github.com/users/keloemma/events{/privacy}",
"followers_url": "https://api.github.com/users/keloemma/followers",
"following_url": "https://api.github.com/users/keloemma/following{/other_user}",
"gists_url": "https://api.github.com/users/keloemma/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/keloemma",
"id": 40454218,
"login": "keloemma",
"node_id": "MDQ6VXNlcjQwNDU0MjE4",
"organizations_url": "https://api.github.com/users/keloemma/orgs",
"received_events_url": "https://api.github.com/users/keloemma/received_events",
"repos_url": "https://api.github.com/users/keloemma/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/keloemma/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/keloemma/subscriptions",
"type": "User",
"url": "https://api.github.com/users/keloemma"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2021-09-24T15:35:11Z | 2021-09-24T15:38:24Z | 2021-09-24T15:38:24Z | NONE | null | null | null | ## Describe the bug
A clear and concise description of what the bug is.
I am trying to use Dataset to load my file in order to use a BERT embeddings model, but when I finish loading with Dataset and want to pass the data to the tokenizer using the `map` function, I get the following error: raise TypeError(
TypeError: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>. Make sure provided `function` returns a variable of type `dict` to update the dataset or `None` if you are only interested in side effects.
I was able to load my file using Dataset before, but since this morning I keep getting this error.
## Steps to reproduce the bug
```python
# Xtrain, ytrain, filename, len_labels = read_file_2(fic)
# Xtrain, lge_size = get_flaubert_layer(Xtrain, path_to_model_lge)
data_preprocessed = make_new_traindata(Xtrain)
my_dict = {"verbatim": data_preprocessed[1], "label": ytrain} # lemme avec conjonction
dataset = Dataset.from_dict(my_dict)
```
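A self-contained sketch of what triggers this error and how to avoid it: `Dataset.map` expects the mapped function to return a `dict` of columns (or `None`), not a `list`. The feature-extraction step is replaced by a dummy function here:
```python
from datasets import Dataset

dataset = Dataset.from_dict({"verbatim": ["un exemple", "un autre"], "label": [0, 1]})

def returns_a_list(batch):
    # A list return value raises the TypeError quoted above
    return [[0.0, 0.0] for _ in batch["verbatim"]]

def returns_a_dict(batch):
    # A dict of new columns is what map() expects
    return {"embedding": [[0.0, 0.0] for _ in batch["verbatim"]]}

# dataset.map(returns_a_list, batched=True)  # -> TypeError
dataset = dataset.map(returns_a_dict, batched=True)
print(dataset.column_names)  # ['verbatim', 'label', 'embedding']
```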
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2963/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2963/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2590/comments | https://api.github.com/repos/huggingface/datasets/issues/2590/events | https://github.com/huggingface/datasets/pull/2590 | 936,954,348 | MDExOlB1bGxSZXF1ZXN0NjgzNTg1MDg2 | 2,590 | Add language tags | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [] | closed | false | null | [] | null | [] | 2021-07-05T10:39:57Z | 2021-07-05T10:58:48Z | 2021-07-05T10:58:48Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2590.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2590",
"merged_at": "2021-07-05T10:58:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2590.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2590"
} | This PR adds some missing language tags needed for ASR datasets in #2565 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2590/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2590/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4825/comments | https://api.github.com/repos/huggingface/datasets/issues/4825/events | https://github.com/huggingface/datasets/pull/4825 | 1,335,856,882 | PR_kwDODunzps49BYWL | 4,825 | [Windows] Fix Access Denied when using os.rename() | {
"avatar_url": "https://avatars.githubusercontent.com/u/8703022?v=4",
"events_url": "https://api.github.com/users/DougTrajano/events{/privacy}",
"followers_url": "https://api.github.com/users/DougTrajano/followers",
"following_url": "https://api.github.com/users/DougTrajano/following{/other_user}",
"gists_url": "https://api.github.com/users/DougTrajano/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/DougTrajano",
"id": 8703022,
"login": "DougTrajano",
"node_id": "MDQ6VXNlcjg3MDMwMjI=",
"organizations_url": "https://api.github.com/users/DougTrajano/orgs",
"received_events_url": "https://api.github.com/users/DougTrajano/received_events",
"repos_url": "https://api.github.com/users/DougTrajano/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/DougTrajano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DougTrajano/subscriptions",
"type": "User",
"url": "https://api.github.com/users/DougTrajano"
} | [] | closed | false | null | [] | null | [
"Cool thank you ! Maybe we can just replace `os.rename` by `shutil.move` instead ?",
"> Cool thank you ! Maybe we can just replace `os.rename` by `shutil.move` instead ?\r\n\r\nYes, I think that could be a better solution, but I didn't test it in Linux (e.g. Ubuntu) to guarantee that `os.rename()` could be comple... | 2022-08-11T11:57:15Z | 2022-08-24T13:09:07Z | 2022-08-24T13:09:07Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4825.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4825",
"merged_at": "2022-08-24T13:09:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4825.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4825"
} | In this PR, we are including an additional step when `os.rename()` raises a PermissionError.
Basically, we will use `shutil.move()` on the temp files.
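A rough sketch of the fallback described above (illustrative, not the exact code added in the PR):
```python
import os
import shutil

def rename_with_fallback(src: str, dst: str) -> None:
    # Try the fast path first; on Windows "Access Denied" surfaces as PermissionError
    try:
        os.rename(src, dst)
    except PermissionError:
        # Fall back to shutil.move(), which copies and then deletes when needed
        shutil.move(src, dst)
```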
Fix #2937 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4825/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4825/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2309/comments | https://api.github.com/repos/huggingface/datasets/issues/2309/events | https://github.com/huggingface/datasets/pull/2309 | 874,644,990 | MDExOlB1bGxSZXF1ZXN0NjI5MTU4NjQx | 2,309 | Fix conda release | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-05-03T14:52:59Z | 2021-05-03T16:01:17Z | 2021-05-03T16:01:17Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2309.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2309",
"merged_at": "2021-05-03T16:01:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2309.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2309"
} | There were a few issues with conda releases (they've been failing for a while now).
To fix this I had to:
- add the --single-version-externally-managed tag to the build stage (suggestion from [here](https://stackoverflow.com/a/64825075))
- set the python version of the conda build stage to 3.8 since 3.9 isn't supported
- sync the version requirement of `huggingface_hub`
With these changes I'm working on uploading all missing versions until 1.6.2 to conda
EDIT: I managed to build and upload all missing versions until 1.6.2 to conda :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2309/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2309/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1223/comments | https://api.github.com/repos/huggingface/datasets/issues/1223/events | https://github.com/huggingface/datasets/pull/1223 | 758,022,208 | MDExOlB1bGxSZXF1ZXN0NTMzMjY2MDc4 | 1,223 | 🇸🇪 Added Swedish Reviews dataset for sentiment classification in Sw… | {
"avatar_url": "https://avatars.githubusercontent.com/u/6556710?v=4",
"events_url": "https://api.github.com/users/timpal0l/events{/privacy}",
"followers_url": "https://api.github.com/users/timpal0l/followers",
"following_url": "https://api.github.com/users/timpal0l/following{/other_user}",
"gists_url": "https://api.github.com/users/timpal0l/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/timpal0l",
"id": 6556710,
"login": "timpal0l",
"node_id": "MDQ6VXNlcjY1NTY3MTA=",
"organizations_url": "https://api.github.com/users/timpal0l/orgs",
"received_events_url": "https://api.github.com/users/timpal0l/received_events",
"repos_url": "https://api.github.com/users/timpal0l/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/timpal0l/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/timpal0l/subscriptions",
"type": "User",
"url": "https://api.github.com/users/timpal0l"
} | [] | closed | false | null | [] | null | [] | 2020-12-06T21:02:54Z | 2020-12-08T10:54:56Z | 2020-12-08T10:54:56Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1223.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1223",
"merged_at": "2020-12-08T10:54:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1223.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1223"
} | perhaps: @lhoestq 🤗 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1223/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1223/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5264 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5264/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5264/comments | https://api.github.com/repos/huggingface/datasets/issues/5264/events | https://github.com/huggingface/datasets/issues/5264 | 1,455,252,906 | I_kwDODunzps5WvWWq | 5,264 | `datasets` can't read a Parquet file in Python 3.9.13 | {
"avatar_url": "https://avatars.githubusercontent.com/u/44069155?v=4",
"events_url": "https://api.github.com/users/loubnabnl/events{/privacy}",
"followers_url": "https://api.github.com/users/loubnabnl/followers",
"following_url": "https://api.github.com/users/loubnabnl/following{/other_user}",
"gists_url": "https://api.github.com/users/loubnabnl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/loubnabnl",
"id": 44069155,
"login": "loubnabnl",
"node_id": "MDQ6VXNlcjQ0MDY5MTU1",
"organizations_url": "https://api.github.com/users/loubnabnl/orgs",
"received_events_url": "https://api.github.com/users/loubnabnl/received_events",
"repos_url": "https://api.github.com/users/loubnabnl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/loubnabnl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/loubnabnl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/loubnabnl"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Could you share the full stack trace please ?\r\n\r\n\r\nCan you also try running this code ? It can be useful to determine if the issue comes from `datasets` or `fsspec` (streaming) or `pyarrow` (parquet reading):\r\n```python\r\nds = load_dataset(\"parquet\", data_files=a_parquet_file_url, use_auth_token=True)\r... | 2022-11-18T14:44:01Z | 2023-05-07T09:52:59Z | 2022-11-22T11:18:08Z | NONE | null | null | null | ### Describe the bug
I have an error when trying to load this [dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj) (it's private but I can add you to the bigcode org). `datasets` can't read one of the parquet files in the Java subset
```python
from datasets import load_dataset
ds = load_dataset("bigcode/the-stack-dedup-pjj", data_dir="data/java", split="train", revision="v1.1.a1", use_auth_token=True)
```
```
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
It seems to be an issue with new Python versions, because it works in these two environments:
```
- `datasets` version: 2.6.1
- Platform: Linux-5.4.0-131-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```
```
- `datasets` version: 2.6.1
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```
But not in this:
```
- `datasets` version: 2.6.1
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
```
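A sketch of the isolation step suggested in the comments, to help tell whether the failure comes from `datasets`, `fsspec` (streaming), or `pyarrow` (parquet reading); the URL is a placeholder to be replaced with one of the failing shards:
```python
from datasets import load_dataset

# Placeholder: point this at one of the failing parquet files in the repo
a_parquet_file_url = "https://huggingface.co/datasets/<namespace>/<repo>/resolve/main/data/java/<shard>.parquet"

# Loading through the generic "parquet" builder skips the dataset-specific loading code,
# so the same error here points at pyarrow/fsspec rather than the dataset itself
ds = load_dataset("parquet", data_files=a_parquet_file_url, use_auth_token=True)
```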
### Steps to reproduce the bug
Load the dataset in python 3.9.13
### Expected behavior
Load the dataset without the pyarrow error.
### Environment info
```
- `datasets` version: 2.6.1
- Platform: Linux-4.19.0-22-cloud-amd64-x86_64-with-glibc2.28
- Python version: 3.9.13
- PyArrow version: 9.0.0
- Pandas version: 1.3.4
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5264/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5264/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4227/comments | https://api.github.com/repos/huggingface/datasets/issues/4227/events | https://github.com/huggingface/datasets/pull/4227 | 1,216,455,316 | PR_kwDODunzps420-mc | 4,227 | Add f1 metric card, update docstring in py file | {
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emibaylor",
"id": 27527747,
"login": "emibaylor",
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emibaylor"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-26T20:41:03Z | 2022-05-03T12:50:23Z | 2022-05-03T12:43:33Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4227.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4227",
"merged_at": "2022-05-03T12:43:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4227.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4227"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4227/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4227/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5703/comments | https://api.github.com/repos/huggingface/datasets/issues/5703/events | https://github.com/huggingface/datasets/pull/5703 | 1,653,158,955 | PR_kwDODunzps5NjCCV | 5,703 | [WIP][Test, Please ignore] Investigate performance impact of using multiprocessing only | {
"avatar_url": "https://avatars.githubusercontent.com/u/1535968?v=4",
"events_url": "https://api.github.com/users/hvaara/events{/privacy}",
"followers_url": "https://api.github.com/users/hvaara/followers",
"following_url": "https://api.github.com/users/hvaara/following{/other_user}",
"gists_url": "https://api.github.com/users/hvaara/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hvaara",
"id": 1535968,
"login": "hvaara",
"node_id": "MDQ6VXNlcjE1MzU5Njg=",
"organizations_url": "https://api.github.com/users/hvaara/orgs",
"received_events_url": "https://api.github.com/users/hvaara/received_events",
"repos_url": "https://api.github.com/users/hvaara/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hvaara/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hvaara/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hvaara"
} | [] | closed | false | null | [] | null | [
"`multiprocess` uses `dill` instead of `pickle` for pickling shared objects and, as such, can pickle more types than `multiprocessing`. And I don't think this is something we want to change :).",
"That makes sense to me, and I don't think you should merge this change. I was only curious about the performance impa... | 2023-04-04T04:37:49Z | 2023-04-20T03:17:37Z | 2023-04-20T03:17:32Z | NONE | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5703.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5703",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5703.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5703"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5703/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5703/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5333/comments | https://api.github.com/repos/huggingface/datasets/issues/5333/events | https://github.com/huggingface/datasets/pull/5333 | 1,476,890,156 | PR_kwDODunzps5EXGQ2 | 5,333 | fix: 🐛 pass the token to get the list of config names | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-12-05T16:06:09Z | 2022-12-06T08:25:17Z | 2022-12-06T08:22:49Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5333.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5333",
"merged_at": "2022-12-06T08:22:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5333.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5333"
} | Otherwise, get_dataset_infos doesn't work on gated or private datasets, even with the correct token. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5333/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5333/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3238/comments | https://api.github.com/repos/huggingface/datasets/issues/3238/events | https://github.com/huggingface/datasets/issues/3238 | 1,048,226,086 | I_kwDODunzps4-eqkm | 3,238 | Reuters21578 Couldn't reach | {
"avatar_url": "https://avatars.githubusercontent.com/u/54096137?v=4",
"events_url": "https://api.github.com/users/TingNLP/events{/privacy}",
"followers_url": "https://api.github.com/users/TingNLP/followers",
"following_url": "https://api.github.com/users/TingNLP/following{/other_user}",
"gists_url": "https://api.github.com/users/TingNLP/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TingNLP",
"id": 54096137,
"login": "TingNLP",
"node_id": "MDQ6VXNlcjU0MDk2MTM3",
"organizations_url": "https://api.github.com/users/TingNLP/orgs",
"received_events_url": "https://api.github.com/users/TingNLP/received_events",
"repos_url": "https://api.github.com/users/TingNLP/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TingNLP/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TingNLP/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TingNLP"
} | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | [] | null | [
"Hi ! The URL works fine on my side today, could you try again ?",
"thank you @lhoestq \r\nit works"
] | 2021-11-09T06:08:56Z | 2021-11-11T00:02:57Z | 2021-11-11T00:02:57Z | NONE | null | null | null | ## Adding a Dataset
- **Name:** *Reuters21578*
- **Description:** *ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz*
- **Data:** *https://huggingface.co/datasets/reuters21578*
`from datasets import load_dataset`
`dataset = load_dataset("reuters21578", 'ModLewis')`
ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz
And I tried to request the link as follows:
`import requests`
`requests.head('https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz')`
SSLError: HTTPSConnectionPool(host='kdd.ics.uci.edu', port=443): Max retries exceeded with url: /databases/reuters21578/reuters21578.tar.gz (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),))
This problem is similar to #575.
What should I do?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3238/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3238/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4534/comments | https://api.github.com/repos/huggingface/datasets/issues/4534/events | https://github.com/huggingface/datasets/pull/4534 | 1,277,897,197 | PR_kwDODunzps46AFK_ | 4,534 | Add `tldr_news` dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/32683010?v=4",
"events_url": "https://api.github.com/users/JulesBelveze/events{/privacy}",
"followers_url": "https://api.github.com/users/JulesBelveze/followers",
"following_url": "https://api.github.com/users/JulesBelveze/following{/other_user}",
"gists_url": "https://api.github.com/users/JulesBelveze/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JulesBelveze",
"id": 32683010,
"login": "JulesBelveze",
"node_id": "MDQ6VXNlcjMyNjgzMDEw",
"organizations_url": "https://api.github.com/users/JulesBelveze/orgs",
"received_events_url": "https://api.github.com/users/JulesBelveze/received_events",
"repos_url": "https://api.github.com/users/JulesBelveze/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JulesBelveze/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JulesBelveze/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JulesBelveze"
} | [] | closed | false | null | [] | null | [
"Hey @lhoestq, \r\nSorry for opening a PR, I was following the guide [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)! Thanks for the review anyway, I will follow the instructions you sent 😃 ",
"Thanks, we will update the guide ;)"
] | 2022-06-21T05:02:43Z | 2022-06-23T14:33:54Z | 2022-06-21T14:21:11Z | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4534.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4534",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4534.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4534"
} | This PR aims at adding support for a news dataset: `tldr news`.
This dataset is based on the daily [tldr tech newsletter](https://tldr.tech/newsletter) and contains a `headline` as well as a `content` for every piece of news contained in a newsletter. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4534/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4534/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5668 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5668/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5668/comments | https://api.github.com/repos/huggingface/datasets/issues/5668/events | https://github.com/huggingface/datasets/pull/5668 | 1,638,018,598 | PR_kwDODunzps5MwuIp | 5,668 | Support for downloading only provided split | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | open | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5668). All of your documentation changes will be reflected on that endpoint.",
"My previous comment didn't create the retro-link in the PR. I write it here again.\r\n\r\nYou can check the context and the discussions we had abou... | 2023-03-23T17:53:39Z | 2023-03-24T06:43:14Z | null | CONTRIBUTOR | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5668.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5668",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5668.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5668"
} | We can pass split to `_split_generators()`.
But I'm not sure if it's possible to solve cache issues, mostly with `dataset_info.json` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5668/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5668/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2475/comments | https://api.github.com/repos/huggingface/datasets/issues/2475/events | https://github.com/huggingface/datasets/issues/2475 | 917,650,882 | MDU6SXNzdWU5MTc2NTA4ODI= | 2,475 | Issue in timit_asr database | {
"avatar_url": "https://avatars.githubusercontent.com/u/85702107?v=4",
"events_url": "https://api.github.com/users/hrahamim/events{/privacy}",
"followers_url": "https://api.github.com/users/hrahamim/followers",
"following_url": "https://api.github.com/users/hrahamim/following{/other_user}",
"gists_url": "https://api.github.com/users/hrahamim/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hrahamim",
"id": 85702107,
"login": "hrahamim",
"node_id": "MDQ6VXNlcjg1NzAyMTA3",
"organizations_url": "https://api.github.com/users/hrahamim/orgs",
"received_events_url": "https://api.github.com/users/hrahamim/received_events",
"repos_url": "https://api.github.com/users/hrahamim/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hrahamim/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hrahamim/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hrahamim"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"This bug was fixed in #1995. Upgrading datasets to version 1.6 fixes the issue!",
"Indeed was a fixed bug.\r\nWorks on version 1.8\r\nThanks "
] | 2021-06-10T18:05:29Z | 2021-06-13T08:13:50Z | 2021-06-13T08:13:13Z | NONE | null | null | null | ## Describe the bug
I am trying to load the timit_asr dataset; however, only the first record is shown (duplicated over all the rows).
I am using the following line of code:
`dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))`
The above code results in the same sentence duplicated ten times.
It also happens when I use the dataset viewer at Streamlit.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset

dataset = load_dataset("timit_asr", split="test").shuffle().select(range(10))
data = dataset.to_pandas()
```
## Expected results
table with different row information
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.4.1 (also occur in the latest version)
- Platform: Linux-4.15.0-143-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 1.15.3 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2475/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2475/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3413/comments | https://api.github.com/repos/huggingface/datasets/issues/3413/events | https://github.com/huggingface/datasets/pull/3413 | 1,075,854,325 | PR_kwDODunzps4voNZv | 3,413 | Add WIDER FACE dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | 2021-12-09T18:03:38Z | 2022-01-12T14:13:47Z | 2022-01-12T14:13:47Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3413.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3413",
"merged_at": "2022-01-12T14:13:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3413.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3413"
} | Adds the WIDER FACE face detection benchmark.
TODOs:
* [x] dataset card
* [x] dummy data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3413/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3413/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2295 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2295/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2295/comments | https://api.github.com/repos/huggingface/datasets/issues/2295/events | https://github.com/huggingface/datasets/pull/2295 | 872,902,867 | MDExOlB1bGxSZXF1ZXN0NjI3NzY0NDk3 | 2,295 | Create ExtractManager | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior",
"id": 2851292821,
"name": "refactoring",
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring"
}
] | closed | false | null | [] | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | [
"Hi @lhoestq,\r\n\r\nOnce that #2578 has been merged, I would like to ask you to have a look at this PR: it implements the same logic as the one in #2578 but for all the other file compression formats.\r\n\r\nThanks.",
"I think all is done @lhoestq ;)"
] | 2021-04-30T17:13:34Z | 2021-07-12T14:12:03Z | 2021-07-08T08:11:49Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2295.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2295",
"merged_at": "2021-07-08T08:11:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2295.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2295"
} | Perform refactoring to decouple extract functionality. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2295/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2295/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5048 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5048/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5048/comments | https://api.github.com/repos/huggingface/datasets/issues/5048/events | https://github.com/huggingface/datasets/pull/5048 | 1,392,170,680 | PR_kwDODunzps4_7KI2 | 5,048 | Fix bug with labels of eurlex config of lex_glue dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4",
"events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}",
"followers_url": "https://api.github.com/users/iliaschalkidis/followers",
"following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}",
"gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/iliaschalkidis",
"id": 1626984,
"login": "iliaschalkidis",
"node_id": "MDQ6VXNlcjE2MjY5ODQ=",
"organizations_url": "https://api.github.com/users/iliaschalkidis/orgs",
"received_events_url": "https://api.github.com/users/iliaschalkidis/received_events",
"repos_url": "https://api.github.com/users/iliaschalkidis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/iliaschalkidis"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@JamesLYC88 here is the fix! Thanks again!",
"Thanks, @albertvillanova. When do you expect that this change will take effect when someone downloads the dataset?",
"The change is immediately available now, since this change we mad... | 2022-09-30T09:47:12Z | 2022-09-30T16:30:25Z | 2022-09-30T16:21:41Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5048.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5048",
"merged_at": "2022-09-30T16:21:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5048.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5048"
} | Fix for a critical bug in the EURLEX dataset label list to make LexGLUE EURLEX results replicable.
In LexGLUE (Chalkidis et al., 2022), the following is mentioned w.r.t. EUR-LEX: _"It supports four different label granularities, comprising 21, 127, 567, 7390 EuroVoc concepts, respectively. We use the 100 most frequent concepts from level 2 [...]”._ The current label list has all 127 labels, which leads to different (lower) results, as communicated by users.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5048/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5048/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4844/comments | https://api.github.com/repos/huggingface/datasets/issues/4844/events | https://github.com/huggingface/datasets/pull/4844 | 1,337,878,249 | PR_kwDODunzps49IFLa | 4,844 | Add 'val' to VALIDATION_KEYWORDS. | {
"avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4",
"events_url": "https://api.github.com/users/akt42/events{/privacy}",
"followers_url": "https://api.github.com/users/akt42/followers",
"following_url": "https://api.github.com/users/akt42/following{/other_user}",
"gists_url": "https://api.github.com/users/akt42/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/akt42",
"id": 98386959,
"login": "akt42",
"node_id": "U_kgDOBd1EDw",
"organizations_url": "https://api.github.com/users/akt42/orgs",
"received_events_url": "https://api.github.com/users/akt42/received_events",
"repos_url": "https://api.github.com/users/akt42/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akt42/subscriptions",
"type": "User",
"url": "https://api.github.com/users/akt42"
} | [] | closed | false | null | [] | null | [
"@mariosasko not sure about how the reviewing process works. Maybe you can have a look because we discussed this elsewhere?",
"Hi, thanks! \r\n\r\nLet's add one pattern with `val` to this test before merging: \r\nhttps://github.com/huggingface/datasets/blob/b88a656cf94c4ad972154371c83c1af759fde522/tests/test_data... | 2022-08-13T06:49:41Z | 2022-08-30T10:17:35Z | 2022-08-30T10:14:54Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4844.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4844",
"merged_at": "2022-08-30T10:14:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4844.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4844"
} | This PR fixes #4839 by adding the word `"val"` to the `VALIDATION_KEYWORDS` so that the `load_dataset()` method with `imagefolder` (and probably, some other directives as well) reads folders named `"val"` as well.
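For illustration, a minimal sketch of the behaviour this enables (directory and file names below are hypothetical, not from the PR itself): with `val` among the split keywords, a `val/` folder should be picked up as the validation split.

```python
from datasets import load_dataset

# Hypothetical layout: images/train/<class>/*.png and images/val/<class>/*.png.
# With "val" in VALIDATION_KEYWORDS, the val/ folder should be resolved as the
# validation split without having to rename it to "valid" or "validation".
ds = load_dataset("imagefolder", data_dir="images")
print(ds)  # expected: a DatasetDict with "train" and "validation" splits
```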
I think the supported keywords have to be mentioned in the documentation as well, but I couldn't think of a proper place to add that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4844/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4844/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4200/comments | https://api.github.com/repos/huggingface/datasets/issues/4200/events | https://github.com/huggingface/datasets/pull/4200 | 1,211,980,110 | PR_kwDODunzps42mz0w | 4,200 | Add to docs how to load from local script | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-22T08:08:25Z | 2022-05-06T08:39:25Z | 2022-04-23T05:47:25Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4200.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4200",
"merged_at": "2022-04-23T05:47:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4200.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4200"
} | This option was missing from the docs guide (it was only explained in the docstring of `load_dataset`). Although this is an infrequent use case, there might be some users interested in it.
Related to #4192
CC: @stevhliu | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4200/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4200/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4785 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4785/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4785/comments | https://api.github.com/repos/huggingface/datasets/issues/4785/events | https://github.com/huggingface/datasets/pull/4785 | 1,327,225,826 | PR_kwDODunzps48k8y4 | 4,785 | Require torchaudio<0.12.0 in docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-08-03T13:32:00Z | 2022-08-03T15:07:43Z | 2022-08-03T14:52:16Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4785.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4785",
"merged_at": "2022-08-03T14:52:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4785.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4785"
} | This PR adds to docs the requirement of torchaudio<0.12.0 to avoid RuntimeError.
Subsequent to PR:
- #4777 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4785/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4785/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5625/comments | https://api.github.com/repos/huggingface/datasets/issues/5625/events | https://github.com/huggingface/datasets/issues/5625 | 1,618,971,855 | I_kwDODunzps5gf4zP | 5,625 | Allow "jsonl" data type signifier | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BramVanroy",
"id": 2779410,
"login": "BramVanroy",
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"organizations_url": "https://api.github.com/users/BramVanroy/orgs",
"received_events_url": "https://api.github.com/users/BramVanroy/received_events",
"repos_url": "https://api.github.com/users/BramVanroy/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BramVanroy"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"You can use \"json\" instead. It doesn't work by extension names, but rather by dataset builder names, e.g. \"text\", \"imagefolder\", etc. I don't think the example in `transformers` is correct because of that",
"Yes, I understand the reasoning but this issue is to propose that the example in transformers (whil... | 2023-03-10T13:21:48Z | 2023-03-11T10:35:39Z | null | CONTRIBUTOR | null | null | null | ### Feature request
`load_dataset` currently does not accept `jsonl` as a type, only `json`.
### Motivation
I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as train_dataset. But the default code did not work because
```
FileNotFoundError: Couldn't find a dataset script at jsonl\jsonl.py or any data file in the same directory. Couldn't find 'jsonl' on the Hugging Face Hub either: FileNotFoundError: Dataset 'jsonl' doesn't exist on the Hub. If the repo is private or gated, make sure to log in with `huggingface-cli login`.
```
The reason is that the script has these lines to derive the data type from the file extension. The derived type is therefore `jsonl`, which is not recognized by `datasets`, as the error above shows.
https://github.com/huggingface/transformers/blob/ade26bf9912f69e2110137443e4406d7dbe253e7/examples/pytorch/translation/run_translation.py#L342-L356
I suppose you could argue that this is the script's fault (in which case I'll do a PR over at `transformers`) but it makes sense to me to add `jsonl` as an alias to `json` in `datasets`.
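For reference, a minimal sketch of the current workaround (file names are placeholders): the existing `json` builder already parses JSON Lines, so the extension just has to be mapped to it explicitly.

```python
from datasets import load_dataset

# Workaround sketch: point the "json" builder (which also reads JSON Lines)
# at the .jsonl files explicitly; the paths below are placeholders.
dataset = load_dataset(
    "json",
    data_files={"train": "train.jsonl", "validation": "dev.jsonl"},
)
```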
### Your contribution
At the moment I cannot work on this. I think it can be as "easy" as having an alias for json, namely jsonl. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5625/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5625/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3054/comments | https://api.github.com/repos/huggingface/datasets/issues/3054/events | https://github.com/huggingface/datasets/pull/3054 | 1,022,108,186 | PR_kwDODunzps4s_TmE | 3,054 | Update Biosses | {
"avatar_url": "https://avatars.githubusercontent.com/u/6764450?v=4",
"events_url": "https://api.github.com/users/bwang482/events{/privacy}",
"followers_url": "https://api.github.com/users/bwang482/followers",
"following_url": "https://api.github.com/users/bwang482/following{/other_user}",
"gists_url": "https://api.github.com/users/bwang482/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bwang482",
"id": 6764450,
"login": "bwang482",
"node_id": "MDQ6VXNlcjY3NjQ0NTA=",
"organizations_url": "https://api.github.com/users/bwang482/orgs",
"received_events_url": "https://api.github.com/users/bwang482/received_events",
"repos_url": "https://api.github.com/users/bwang482/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bwang482/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bwang482/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bwang482"
} | [] | closed | false | null | [] | null | [] | 2021-10-10T22:25:12Z | 2021-10-13T09:04:27Z | 2021-10-13T09:04:27Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3054.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3054",
"merged_at": "2021-10-13T09:04:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3054.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3054"
} | Fix variable naming | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3054/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3054/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6066/comments | https://api.github.com/repos/huggingface/datasets/issues/6066/events | https://github.com/huggingface/datasets/issues/6066 | 1,819,717,542 | I_kwDODunzps5sdq-m | 6,066 | AttributeError: '_tqdm_cls' object has no attribute '_lock' | {
"avatar_url": "https://avatars.githubusercontent.com/u/138426806?v=4",
"events_url": "https://api.github.com/users/codingl2k1/events{/privacy}",
"followers_url": "https://api.github.com/users/codingl2k1/followers",
"following_url": "https://api.github.com/users/codingl2k1/following{/other_user}",
"gists_url": "https://api.github.com/users/codingl2k1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/codingl2k1",
"id": 138426806,
"login": "codingl2k1",
"node_id": "U_kgDOCEA5tg",
"organizations_url": "https://api.github.com/users/codingl2k1/orgs",
"received_events_url": "https://api.github.com/users/codingl2k1/received_events",
"repos_url": "https://api.github.com/users/codingl2k1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/codingl2k1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/codingl2k1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/codingl2k1"
} | [] | closed | false | null | [] | null | [
"Hi ! I opened https://github.com/huggingface/datasets/pull/6067 to add the missing `_lock`\r\n\r\nWe'll do a patch release soon, but feel free to install `datasets` from source in the meantime",
"I have tested the latest main, it does not work.\r\n\r\nI add more logs to reproduce this issue, it looks like a mult... | 2023-07-25T07:24:36Z | 2023-07-26T10:56:25Z | 2023-07-26T10:56:24Z | NONE | null | null | null | ### Describe the bug
```python
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/load.py", line 1034, in get_module
data_files = DataFilesDict.from_patterns(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 671, in from_patterns
DataFilesList.from_patterns(
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 586, in from_patterns
origin_metadata = _get_origin_metadata(data_files, download_config=download_config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/datasets/data_files.py", line 502, in _get_origin_metadata
return thread_map(
^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 70, in thread_map
return _executor_map(ThreadPoolExecutor, fn, *iterables, **tqdm_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 48, in _executor_map
with ensure_lock(tqdm_class, lock_name=lock_name) as lk:
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/contextlib.py", line 144, in __exit__
next(self.gen)
File "/Users/codingl2k1/.pyenv/versions/3.11.4/lib/python3.11/site-packages/tqdm/contrib/concurrent.py", line 25, in ensure_lock
del tqdm_class._lock
^^^^^^^^^^^^^^^^
AttributeError: '_tqdm_cls' object has no attribute '_lock'
```
### Steps to reproduce the bug
Happens occasionally.
### Expected behavior
I added a print in tqdm `ensure_lock()` and got the following: `ensure_lock <datasets.utils.logging._tqdm_cls object at 0x16dddead0>`.
According to the code in https://github.com/tqdm/tqdm/blob/master/tqdm/contrib/concurrent.py#L24
```python
@contextmanager
def ensure_lock(tqdm_class, lock_name=""):
"""get (create if necessary) and then restore `tqdm_class`'s lock"""
print("ensure_lock", tqdm_class, lock_name)
old_lock = getattr(tqdm_class, '_lock', None) # don't create a new lock
lock = old_lock or tqdm_class.get_lock() # maybe create a new lock
lock = getattr(lock, lock_name, lock) # maybe subtype
tqdm_class.set_lock(lock)
yield lock
if old_lock is None:
del tqdm_class._lock # <-- It tries to del the `_lock` attribute from tqdm_class.
else:
tqdm_class.set_lock(old_lock)
```
But the Hugging Face `datasets` wrapper `datasets.utils.logging._tqdm_cls` does not have a `_lock` attribute: https://github.com/huggingface/datasets/blob/main/src/datasets/utils/logging.py#L205
```python
class _tqdm_cls:
def __call__(self, *args, disable=False, **kwargs):
if _tqdm_active and not disable:
return tqdm_lib.tqdm(*args, **kwargs)
else:
return EmptyTqdm(*args, **kwargs)
def set_lock(self, *args, **kwargs):
self._lock = None
if _tqdm_active:
return tqdm_lib.tqdm.set_lock(*args, **kwargs)
def get_lock(self):
if _tqdm_active:
return tqdm_lib.tqdm.get_lock()
```
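One defensive way to make that deletion safe, shown only as a sketch and not necessarily the patch that was merged in #6067, is to intercept `__delattr__` so a missing `_lock` is ignored:

```python
class _tqdm_cls:
    # __call__, set_lock and get_lock stay as in the snippet above; only
    # __delattr__ is added so that `del tqdm_class._lock` inside
    # tqdm.contrib.concurrent.ensure_lock() cannot fail when the attribute
    # was never set or was already removed by a concurrent call.
    def __delattr__(self, attr):
        try:
            super().__delattr__(attr)
        except AttributeError:
            if attr != "_lock":
                raise
```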
### Environment info
Python 3.11.4
tqdm '4.65.0'
datasets master | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6066/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6066/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2549/comments | https://api.github.com/repos/huggingface/datasets/issues/2549/events | https://github.com/huggingface/datasets/issues/2549 | 929,819,093 | MDU6SXNzdWU5Mjk4MTkwOTM= | 2,549 | Handling unlabeled datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4",
"events_url": "https://api.github.com/users/nelson-liu/events{/privacy}",
"followers_url": "https://api.github.com/users/nelson-liu/followers",
"following_url": "https://api.github.com/users/nelson-liu/following{/other_user}",
"gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nelson-liu",
"id": 7272031,
"login": "nelson-liu",
"node_id": "MDQ6VXNlcjcyNzIwMzE=",
"organizations_url": "https://api.github.com/users/nelson-liu/orgs",
"received_events_url": "https://api.github.com/users/nelson-liu/received_events",
"repos_url": "https://api.github.com/users/nelson-liu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nelson-liu"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi @nelson-liu,\r\n\r\nYou can pass the parameter `features` to `load_dataset`: https://huggingface.co/docs/datasets/_modules/datasets/load.html#load_dataset\r\n\r\nIf you look at the code of the MNLI script you referred in your question (https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi... | 2021-06-25T04:32:23Z | 2021-06-25T21:07:57Z | 2021-06-25T21:07:56Z | NONE | null | null | null | Hi!
Is there a way for datasets to produce unlabeled instances (e.g., by making the `ClassLabel` nullable)?
For example, I want to use the MNLI dataset reader ( https://github.com/huggingface/datasets/blob/master/datasets/multi_nli/multi_nli.py ) on a file that doesn't have the `gold_label` field. I tried setting `"label": data.get("gold_label")`, but got the following error:
```
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/load.py", line 748, in load_dataset
use_auth_token=use_auth_token,
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 652, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/builder.py", line 989, in _prepare_split
example = self.info.features.encode_example(record)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 953, in encode_example
return encode_nested_example(self, example)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in encode_nested_example
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 848, in <dictcomp>
k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 875, in encode_nested_example
return schema.encode_example(obj)
File "/home/nfliu/miniconda3/envs/debias/lib/python3.7/site-packages/datasets/features.py", line 653, in encode_example
if not -1 <= example_data < self.num_classes:
TypeError: '<=' not supported between instances of 'int' and 'NoneType'
```
What's the proper way to handle reading unlabeled datasets, especially for downstream usage with Transformers? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2549/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2549/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4910/comments | https://api.github.com/repos/huggingface/datasets/issues/4910/events | https://github.com/huggingface/datasets/issues/4910 | 1,354,374,328 | I_kwDODunzps5Quhy4 | 4,910 | Identical keywords in build_kwargs and config_kwargs lead to TypeError in load_dataset_builder() | {
"avatar_url": "https://avatars.githubusercontent.com/u/57184353?v=4",
"events_url": "https://api.github.com/users/bablf/events{/privacy}",
"followers_url": "https://api.github.com/users/bablf/followers",
"following_url": "https://api.github.com/users/bablf/following{/other_user}",
"gists_url": "https://api.github.com/users/bablf/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/bablf",
"id": 57184353,
"login": "bablf",
"node_id": "MDQ6VXNlcjU3MTg0MzUz",
"organizations_url": "https://api.github.com/users/bablf/orgs",
"received_events_url": "https://api.github.com/users/bablf/received_events",
"repos_url": "https://api.github.com/users/bablf/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/bablf/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bablf/subscriptions",
"type": "User",
"url": "https://api.github.com/users/bablf"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "7057ff",
"default": true,
"descript... | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/21123710?v=4",
"events_url": "https://api.github.com/users/thepurpleowl/events{/privacy}",
"followers_url": "https://api.github.com/users/thepurpleowl/followers",
"following_url": "https://api.github.com/users/thepurpleowl/following{/other_user}",
"gists_url": "https://api.github.com/users/thepurpleowl/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thepurpleowl",
"id": 21123710,
"login": "thepurpleowl",
"node_id": "MDQ6VXNlcjIxMTIzNzEw",
"organizations_url": "https://api.github.com/users/thepurpleowl/orgs",
"received_events_url": "https://api.github.com/users/thepurpleowl/received_events",
"repos_url": "https://api.github.com/users/thepurpleowl/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thepurpleowl/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thepurpleowl/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thepurpleowl"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/21123710?v=4",
"events_url": "https://api.github.com/users/thepurpleowl/events{/privacy}",
"followers_url": "https://api.github.com/users/thepurpleowl/followers",
"following_url": "https://api.github.com/users/thepurpleowl/following{/other_use... | null | [
"I am getting similar error - `TypeError: type object got multiple values for keyword argument 'name'` while following this [tutorial](https://huggingface.co/docs/datasets/dataset_script#create-a-dataset-loading-script). I am getting this error with the `dataset-cli test` command.\r\n\r\n`datasets` version: 2.4.0",... | 2022-08-29T14:11:48Z | 2022-09-13T11:58:46Z | null | NONE | null | null | null | ## Describe the bug
In `load_dataset_builder()`, `build_kwargs` and `config_kwargs` can contain the same keywords, leading to a TypeError("type object got multiple values for keyword argument 'xyz'").
I ran into this problem with the keyword `base_path`. It might happen with other kwargs as well. I think a quickfix would be:
```python
builder_cls = import_main_class(dataset_module.module_path)
builder_kwargs = dataset_module.builder_kwargs
data_files = builder_kwargs.pop("data_files", data_files)
config_name = builder_kwargs.pop("config_name", name)
hash = builder_kwargs.pop("hash")
base_path = builder_kwargs.pop("base_path")
```
and then pass base_path into `builder_cls`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("rotten_tomatoes", base_path="./sample_data")
```
## Expected results
The docs state: `**config_kwargs` — Keyword arguments to be passed to the [BuilderConfig](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.BuilderConfig) and used in the [DatasetBuilder](https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/builder_classes#datasets.DatasetBuilder).
So I would expect to be able to pass the base_path into `load_dataset()`.
## Actual results
TypeError("type object got multiple values for keyword argument "base_path").
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: macOS-12.5-arm64-arm-64bit
- Python version: 3.8.9
- PyArrow version: 9.0.0
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4910/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4910/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1550 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1550/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1550/comments | https://api.github.com/repos/huggingface/datasets/issues/1550/events | https://github.com/huggingface/datasets/pull/1550 | 765,620,925 | MDExOlB1bGxSZXF1ZXN0NTM5MDEwMDY1 | 1,550 | Add offensive langauge dravidian dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7421838?v=4",
"events_url": "https://api.github.com/users/jamespaultg/events{/privacy}",
"followers_url": "https://api.github.com/users/jamespaultg/followers",
"following_url": "https://api.github.com/users/jamespaultg/following{/other_user}",
"gists_url": "https://api.github.com/users/jamespaultg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jamespaultg",
"id": 7421838,
"login": "jamespaultg",
"node_id": "MDQ6VXNlcjc0MjE4Mzg=",
"organizations_url": "https://api.github.com/users/jamespaultg/orgs",
"received_events_url": "https://api.github.com/users/jamespaultg/received_events",
"repos_url": "https://api.github.com/users/jamespaultg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jamespaultg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamespaultg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jamespaultg"
} | [] | closed | false | null | [] | null | [
"Thanks much!"
] | 2020-12-13T19:54:19Z | 2020-12-18T15:52:49Z | 2020-12-18T14:25:30Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1550.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1550",
"merged_at": "2020-12-18T14:25:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1550.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1550"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1550/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1550/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/2209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2209/comments | https://api.github.com/repos/huggingface/datasets/issues/2209/events | https://github.com/huggingface/datasets/pull/2209 | 855,638,232 | MDExOlB1bGxSZXF1ZXN0NjEzMzQwMTI2 | 2,209 | Add code of conduct to the project | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | [] | null | [] | 2021-04-12T07:16:14Z | 2021-04-12T17:55:52Z | 2021-04-12T17:55:52Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2209.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2209",
"merged_at": "2021-04-12T17:55:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2209.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2209"
} | Add code of conduct to the project and link it from README and CONTRIBUTING.
This was already done in `transformers`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2209/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2209/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3699/comments | https://api.github.com/repos/huggingface/datasets/issues/3699/events | https://github.com/huggingface/datasets/pull/3699 | 1,130,200,593 | PR_kwDODunzps4yY49I | 3,699 | Add dev-only config to Natural Questions dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Great thanks ! I think we can fix the CI by copying the NQ folder on gcs to 0.0.3. Does that sound good ?",
"I've copied the 0.0.2 folder content to 0.0.3, as suggested.\r\n\r\nI'm updating the dataset card..."
] | 2022-02-10T14:42:24Z | 2022-02-11T09:50:22Z | 2022-02-11T09:50:21Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3699.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3699",
"merged_at": "2022-02-11T09:50:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3699.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3699"
} | As suggested by @lhoestq and @thomwolf, a new config has been added to the Natural Questions dataset, so that only the dev split can be downloaded.
Fix #413. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3699/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3699/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1930/comments | https://api.github.com/repos/huggingface/datasets/issues/1930/events | https://github.com/huggingface/datasets/pull/1930 | 814,055,198 | MDExOlB1bGxSZXF1ZXN0NTc4MTAwNzI0 | 1,930 | updated the wino_bias dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}",
"followers_url": "https://api.github.com/users/JieyuZhao/followers",
"following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}",
"gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JieyuZhao",
"id": 22306304,
"login": "JieyuZhao",
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"organizations_url": "https://api.github.com/users/JieyuZhao/orgs",
"received_events_url": "https://api.github.com/users/JieyuZhao/received_events",
"repos_url": "https://api.github.com/users/JieyuZhao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JieyuZhao"
} | [] | closed | false | null | [] | null | [
"Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\nThanks again for your help on this !",
"> Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\n> Thanks again for your help on this !\r\n\r\nHi @lhoestq Yes, I've updated the code. Now the configuration will... | 2021-02-23T03:07:40Z | 2021-04-07T15:24:56Z | 2021-04-07T15:24:56Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1930.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1930",
"merged_at": "2021-04-07T15:24:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1930.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1930"
} | Updated the wino_bias.py script.
- updated the data_url
- added different configurations for different data splits
- added the coreference_cluster to the data features | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1930/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1930/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4329/comments | https://api.github.com/repos/huggingface/datasets/issues/4329/events | https://github.com/huggingface/datasets/pull/4329 | 1,233,991,207 | PR_kwDODunzps43uIcF | 4,329 | Adding eval metadata for AG News | {
"avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4",
"events_url": "https://api.github.com/users/sashavor/events{/privacy}",
"followers_url": "https://api.github.com/users/sashavor/followers",
"following_url": "https://api.github.com/users/sashavor/following{/other_user}",
"gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sashavor",
"id": 14205986,
"login": "sashavor",
"node_id": "MDQ6VXNlcjE0MjA1OTg2",
"organizations_url": "https://api.github.com/users/sashavor/orgs",
"received_events_url": "https://api.github.com/users/sashavor/received_events",
"repos_url": "https://api.github.com/users/sashavor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sashavor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sashavor"
} | [] | closed | false | null | [] | null | [] | 2022-05-12T13:30:32Z | 2022-05-12T21:02:41Z | 2022-05-12T21:02:40Z | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4329.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4329",
"merged_at": "2022-05-12T21:02:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4329.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4329"
} | Adding eval metadata for AG News | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4329/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4329/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4104/comments | https://api.github.com/repos/huggingface/datasets/issues/4104/events | https://github.com/huggingface/datasets/issues/4104 | 1,194,072,966 | I_kwDODunzps5HLBuG | 4,104 | Add time series data - stock market | {
"avatar_url": "https://avatars.githubusercontent.com/u/45640029?v=4",
"events_url": "https://api.github.com/users/INF800/events{/privacy}",
"followers_url": "https://api.github.com/users/INF800/followers",
"following_url": "https://api.github.com/users/INF800/following{/other_user}",
"gists_url": "https://api.github.com/users/INF800/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/INF800",
"id": 45640029,
"login": "INF800",
"node_id": "MDQ6VXNlcjQ1NjQwMDI5",
"organizations_url": "https://api.github.com/users/INF800/orgs",
"received_events_url": "https://api.github.com/users/INF800/received_events",
"repos_url": "https://api.github.com/users/INF800/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/INF800/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/INF800/subscriptions",
"type": "User",
"url": "https://api.github.com/users/INF800"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | [] | null | [
"Can I use instructions present in below link for time series dataset as well? \r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md ",
"cc'ing @kashif and @NielsRogge for visibility!",
"@INF800 happy to add this dataset! I will try to set a PR by the end of the day... if you can kindly poi... | 2022-04-06T05:46:58Z | 2022-04-11T09:07:10Z | null | NONE | null | null | null | ## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** Data for 8 stocks collected over 1 month after the start of the Ukraine-Russia war: 4 NSE stocks and 4 NASDAQ stocks, along with technical indicators (additional features) as shown in the image below
- **Data:** Collected by myself from investing.com
- **Motivation:** Test the applicability of transformer-based models to stock market / time series problems
 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4104/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4104/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/6024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6024/comments | https://api.github.com/repos/huggingface/datasets/issues/6024/events | https://github.com/huggingface/datasets/pull/6024 | 1,801,708,808 | PR_kwDODunzps5VWbGe | 6,024 | Don't reference self in Spark._validate_cache_dir | {
"avatar_url": "https://avatars.githubusercontent.com/u/106995444?v=4",
"events_url": "https://api.github.com/users/maddiedawson/events{/privacy}",
"followers_url": "https://api.github.com/users/maddiedawson/followers",
"following_url": "https://api.github.com/users/maddiedawson/following{/other_user}",
"gists_url": "https://api.github.com/users/maddiedawson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/maddiedawson",
"id": 106995444,
"login": "maddiedawson",
"node_id": "U_kgDOBmCe9A",
"organizations_url": "https://api.github.com/users/maddiedawson/orgs",
"received_events_url": "https://api.github.com/users/maddiedawson/received_events",
"repos_url": "https://api.github.com/users/maddiedawson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/maddiedawson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maddiedawson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/maddiedawson"
} | [] | closed | false | null | [] | null | [
"Ptal @lhoestq :) I tested this manually on a multi-node Databricks cluster",
"Hm looks like the check_code_quality failures are unrelated to me change... https://github.com/huggingface/datasets/actions/runs/5536162850/jobs/10103451883?pr=6024",
"_The documentation is not available anymore as the PR was closed ... | 2023-07-12T20:31:16Z | 2023-07-13T16:58:32Z | 2023-07-13T12:37:09Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6024.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6024",
"merged_at": "2023-07-13T12:37:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6024.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6024"
} | Fix for https://github.com/huggingface/datasets/issues/5963 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6024/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6024/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4353/comments | https://api.github.com/repos/huggingface/datasets/issues/4353/events | https://github.com/huggingface/datasets/pull/4353 | 1,236,092,176 | PR_kwDODunzps43016x | 4,353 | Don't strip proceeding hyphen | {
"avatar_url": "https://avatars.githubusercontent.com/u/8917831?v=4",
"events_url": "https://api.github.com/users/JohnGiorgi/events{/privacy}",
"followers_url": "https://api.github.com/users/JohnGiorgi/followers",
"following_url": "https://api.github.com/users/JohnGiorgi/following{/other_user}",
"gists_url": "https://api.github.com/users/JohnGiorgi/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/JohnGiorgi",
"id": 8917831,
"login": "JohnGiorgi",
"node_id": "MDQ6VXNlcjg5MTc4MzE=",
"organizations_url": "https://api.github.com/users/JohnGiorgi/orgs",
"received_events_url": "https://api.github.com/users/JohnGiorgi/received_events",
"repos_url": "https://api.github.com/users/JohnGiorgi/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/JohnGiorgi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/JohnGiorgi/subscriptions",
"type": "User",
"url": "https://api.github.com/users/JohnGiorgi"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-05-14T18:25:29Z | 2022-05-16T18:51:38Z | 2022-05-16T13:52:11Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4353.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4353",
"merged_at": "2022-05-16T13:52:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4353.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4353"
} | Closes #4320. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4353/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4353/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5328/comments | https://api.github.com/repos/huggingface/datasets/issues/5328/events | https://github.com/huggingface/datasets/pull/5328 | 1,471,661,437 | PR_kwDODunzps5EFAyT | 5,328 | Fix docs building for main | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"EDIT\r\nAt least the docs for ~~main~~ PR branch are now built:\r\n- https://github.com/huggingface/datasets/actions/runs/3594847760/jobs/6053620813",
"Build documentation for main branch was triggered after this PR being merged: h... | 2022-12-01T17:07:45Z | 2022-12-02T16:29:00Z | 2022-12-02T16:26:00Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5328.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5328",
"merged_at": "2022-12-02T16:26:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5328.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5328"
} | This PR reverts the triggering event for building documentation introduced by:
- #5250
Fix #5326. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5328/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5328/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2283/comments | https://api.github.com/repos/huggingface/datasets/issues/2283/events | https://github.com/huggingface/datasets/pull/2283 | 870,926,475 | MDExOlB1bGxSZXF1ZXN0NjI2MDM0MDk5 | 2,283 | Initialize imdb dataset from don't stop pretraining paper | {
"avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4",
"events_url": "https://api.github.com/users/BobbyManion/events{/privacy}",
"followers_url": "https://api.github.com/users/BobbyManion/followers",
"following_url": "https://api.github.com/users/BobbyManion/following{/other_user}",
"gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/BobbyManion",
"id": 52530809,
"login": "BobbyManion",
"node_id": "MDQ6VXNlcjUyNTMwODA5",
"organizations_url": "https://api.github.com/users/BobbyManion/orgs",
"received_events_url": "https://api.github.com/users/BobbyManion/received_events",
"repos_url": "https://api.github.com/users/BobbyManion/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions",
"type": "User",
"url": "https://api.github.com/users/BobbyManion"
} | [] | closed | false | null | [] | null | [] | 2021-04-29T11:44:54Z | 2021-04-29T11:50:24Z | 2021-04-29T11:50:24Z | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2283.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2283",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2283.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2283"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2283/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2283/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/1753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1753/comments | https://api.github.com/repos/huggingface/datasets/issues/1753/events | https://github.com/huggingface/datasets/pull/1753 | 789,867,685 | MDExOlB1bGxSZXF1ZXN0NTU4MTQ3Njkx | 1,753 | fix comet citations | {
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"events_url": "https://api.github.com/users/ricardorei/events{/privacy}",
"followers_url": "https://api.github.com/users/ricardorei/followers",
"following_url": "https://api.github.com/users/ricardorei/following{/other_user}",
"gists_url": "https://api.github.com/users/ricardorei/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ricardorei",
"id": 17256847,
"login": "ricardorei",
"node_id": "MDQ6VXNlcjE3MjU2ODQ3",
"organizations_url": "https://api.github.com/users/ricardorei/orgs",
"received_events_url": "https://api.github.com/users/ricardorei/received_events",
"repos_url": "https://api.github.com/users/ricardorei/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ricardorei/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ricardorei/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ricardorei"
} | [] | closed | false | null | [] | null | [] | 2021-01-20T10:52:38Z | 2021-01-20T14:39:30Z | 2021-01-20T14:39:30Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1753.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1753",
"merged_at": "2021-01-20T14:39:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1753.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1753"
} | I realized COMET citations were not showing in the hugging face metrics page:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105164848-8b9da900-5b0d-11eb-9e20-a38f559d2037.png">
This pull request is intended to fix that.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1753/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1753/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5806/comments | https://api.github.com/repos/huggingface/datasets/issues/5806/events | https://github.com/huggingface/datasets/issues/5806 | 1,688,598,095 | I_kwDODunzps5kpfZP | 5,806 | Return the name of the currently loaded file in the load_dataset function. | {
"avatar_url": "https://avatars.githubusercontent.com/u/16948304?v=4",
"events_url": "https://api.github.com/users/s-JoL/events{/privacy}",
"followers_url": "https://api.github.com/users/s-JoL/followers",
"following_url": "https://api.github.com/users/s-JoL/following{/other_user}",
"gists_url": "https://api.github.com/users/s-JoL/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/s-JoL",
"id": 16948304,
"login": "s-JoL",
"node_id": "MDQ6VXNlcjE2OTQ4MzA0",
"organizations_url": "https://api.github.com/users/s-JoL/orgs",
"received_events_url": "https://api.github.com/users/s-JoL/received_events",
"repos_url": "https://api.github.com/users/s-JoL/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/s-JoL/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/s-JoL/subscriptions",
"type": "User",
"url": "https://api.github.com/users/s-JoL"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true... | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/49894149?v=4",
"events_url": "https://api.github.com/users/tsabbir96/events{/privacy}",
"followers_url": "https://api.github.com/users/tsabbir96/followers",
"following_url": "https://api.github.com/users/tsabbir96/following{/other_user}",
"gists_url": "https://api.github.com/users/tsabbir96/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/tsabbir96",
"id": 49894149,
"login": "tsabbir96",
"node_id": "MDQ6VXNlcjQ5ODk0MTQ5",
"organizations_url": "https://api.github.com/users/tsabbir96/orgs",
"received_events_url": "https://api.github.com/users/tsabbir96/received_events",
"repos_url": "https://api.github.com/users/tsabbir96/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/tsabbir96/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tsabbir96/subscriptions",
"type": "User",
"url": "https://api.github.com/users/tsabbir96"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/49894149?v=4",
"events_url": "https://api.github.com/users/tsabbir96/events{/privacy}",
"followers_url": "https://api.github.com/users/tsabbir96/followers",
"following_url": "https://api.github.com/users/tsabbir96/following{/other_user}",
... | null | [
"Implementing this makes sense (e.g., `tensorflow_datasets`' imagefolder returns image filenames). Also, in Datasets 3.0, we plan only to store the bytes of an image/audio, not its path, so this feature would be useful when the path info is still needed.",
"Hey @mariosasko, Can I work on this issue, this one seem... | 2023-04-28T13:50:15Z | 2023-09-29T17:49:53Z | null | NONE | null | null | null | ### Feature request
Add an optional parameter `return_file_name` to the `load_dataset` function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output.
### Motivation
When training large language models, machine failures may interrupt the training process. In such cases, it is common to load a previously saved checkpoint to resume training. I would like to be able to obtain the names of the previously trained data shards, so that I can skip these parts of the data during continued training to avoid overfitting and redundant training time.
### Your contribution
I currently use a dataset in jsonl format, so I am primarily interested in the json format. I suggest adding the file name to the returned table here https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/json/json.py#L92. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5806/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5806/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1934 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1934/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1934/comments | https://api.github.com/repos/huggingface/datasets/issues/1934/events | https://github.com/huggingface/datasets/issues/1934 | 814,437,190 | MDU6SXNzdWU4MTQ0MzcxOTA= | 1,934 | Add Stanford Sentiment Treebank (SST) | {
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patpizio",
"id": 15801338,
"login": "patpizio",
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"repos_url": "https://api.github.com/users/patpizio/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patpizio"
} | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | [] | null | [
"Dataset added in release [1.5.0](https://github.com/huggingface/datasets/releases/tag/1.5.0), I think I can close this."
] | 2021-02-23T12:53:16Z | 2021-03-18T17:51:44Z | 2021-03-18T17:51:44Z | CONTRIBUTOR | null | null | null | I am going to add SST:
- **Name:** The Stanford Sentiment Treebank
- **Description:** The first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language
- **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- **Data:** https://nlp.stanford.edu/sentiment/index.html
- **Motivation:** Already requested in #353, SST is a popular dataset for Sentiment Classification
What's the difference with the [_SST-2_](https://huggingface.co/datasets/viewer/?dataset=glue&config=sst2) dataset included in GLUE? Essentially, SST-2 is a version of SST where:
- the labels were mapped from real numbers in [0.0, 1.0] to a binary label: {0, 1}
- the labels of the *sub-sentences* were included only in the training set
- the labels in the test set are obfuscated
So there is a lot more information in the original SST. The tricky bit is that the data is scattered across many text files and, for one in particular, I couldn't find the original encoding ([*but I'm not the only one*](https://groups.google.com/g/word2vec-toolkit/c/QIUjLw6RqFk/m/_iEeyt428wkJ) 🎵). The only solution I found was to manually fix all the è, ë, ç and so on in a `utf-8` copy of the text file. I uploaded the result to my Dropbox and I am using that as the main repo for the dataset.
Also, the _sub-sentences_ are built at run-time from the information encoded in several text files, so generating the examples is a bit more cumbersome than usual. Luckily, the dataset is not enormous.
I plan to divide the dataset in 2 configs: one with just whole sentences with their labels, the other with sentences _and their sub-sentences_ with their labels. Each config will be split in train, validation and test. Hopefully this makes sense, we may discuss it in the PR I'm going to submit.
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1934/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1934/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1366/comments | https://api.github.com/repos/huggingface/datasets/issues/1366/events | https://github.com/huggingface/datasets/pull/1366 | 760,205,506 | MDExOlB1bGxSZXF1ZXN0NTM1MDc1ODU2 | 1,366 | Adding Hope EDI dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7421838?v=4",
"events_url": "https://api.github.com/users/jamespaultg/events{/privacy}",
"followers_url": "https://api.github.com/users/jamespaultg/followers",
"following_url": "https://api.github.com/users/jamespaultg/following{/other_user}",
"gists_url": "https://api.github.com/users/jamespaultg/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jamespaultg",
"id": 7421838,
"login": "jamespaultg",
"node_id": "MDQ6VXNlcjc0MjE4Mzg=",
"organizations_url": "https://api.github.com/users/jamespaultg/orgs",
"received_events_url": "https://api.github.com/users/jamespaultg/received_events",
"repos_url": "https://api.github.com/users/jamespaultg/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jamespaultg/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jamespaultg/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jamespaultg"
} | [] | closed | false | null | [] | null | [
"@lhoestq Have addressed your comments. Please review. Thanks."
] | 2020-12-09T10:30:23Z | 2020-12-14T14:27:57Z | 2020-12-14T14:27:57Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1366.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1366",
"merged_at": "2020-12-14T14:27:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1366.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1366"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1366/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1366/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/3993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3993/comments | https://api.github.com/repos/huggingface/datasets/issues/3993/events | https://github.com/huggingface/datasets/issues/3993 | 1,178,201,495 | I_kwDODunzps5GOe2X | 3,993 | Streaming dataset + interleave + DataLoader hangs with multiple workers | {
"avatar_url": "https://avatars.githubusercontent.com/u/614861?v=4",
"events_url": "https://api.github.com/users/jpilaul/events{/privacy}",
"followers_url": "https://api.github.com/users/jpilaul/followers",
"following_url": "https://api.github.com/users/jpilaul/following{/other_user}",
"gists_url": "https://api.github.com/users/jpilaul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jpilaul",
"id": 614861,
"login": "jpilaul",
"node_id": "MDQ6VXNlcjYxNDg2MQ==",
"organizations_url": "https://api.github.com/users/jpilaul/orgs",
"received_events_url": "https://api.github.com/users/jpilaul/received_events",
"repos_url": "https://api.github.com/users/jpilaul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jpilaul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jpilaul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jpilaul"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Same thing occurs when streaming files loaded from disk.",
"Hi ! Thanks for reporting, could this be related to https://github.com/huggingface/datasets/issues/3950 ?\r\n\r\nCurrently streaming datasets only works in single process, but we're working on having in work in distributed setups as well :) (EDIT: done)... | 2022-03-23T14:27:29Z | 2023-02-28T14:14:24Z | null | NONE | null | null | null | ## Describe the bug
Interleaving multiple iterable datasets that use `load_dataset` in streaming mode hangs when passed to `torch.utils.data.DataLoader` with multiple workers.
## Steps to reproduce the bug
```python
from datasets import interleave_datasets, load_dataset
from torch.utils.data import DataLoader
en_dataset = load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True)
fr_dataset = load_dataset('oscar', "unshuffled_deduplicated_fr", split='train', streaming=True)
it_dataset = load_dataset('oscar', "unshuffled_deduplicated_it", split='train', streaming=True)
de_dataset = load_dataset('oscar', "unshuffled_deduplicated_de", split='train', streaming=True)
multilingual_dataset = interleave_datasets([en_dataset, fr_dataset, de_dataset, it_dataset])
multilingual_dataset = multilingual_dataset.with_format('torch')
next(iter(multilingual_dataset)) # works fairly fast
dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=4)
for batch in dataloader:
print(len(batch)) # prints nothing after 30 min of waiting
dataloader = DataLoader(multilingual_dataset, batch_size=8, num_workers=0)
for batch in dataloader:
print(len(batch)) # prints right away
```
## Expected results
It should be able to iterate the dataset with multiple workers.
## Actual results
Prints results with `next(iter(multilingual_dataset))` and with `num_workers=0`, but prints nothing with `num_workers=4` or any number above 0.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.0.1.dev0
- `pytorch` version: 1.10.0+cu113
- Python version: 3.7
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3993/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3993/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1332/comments | https://api.github.com/repos/huggingface/datasets/issues/1332/events | https://github.com/huggingface/datasets/pull/1332 | 759,679,135 | MDExOlB1bGxSZXF1ZXN0NTM0NjQxOTE5 | 1,332 | Add Open Subtitles Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhishekkrthakur",
"id": 1183441,
"login": "abhishekkrthakur",
"node_id": "MDQ6VXNlcjExODM0NDE=",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhishekkrthakur"
} | [] | closed | false | null | [] | null | [] | 2020-12-08T18:31:45Z | 2020-12-10T11:17:38Z | 2020-12-10T11:13:18Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1332.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1332",
"merged_at": "2020-12-10T11:13:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1332.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1332"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1332/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1332/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/2527 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2527/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2527/comments | https://api.github.com/repos/huggingface/datasets/issues/2527/events | https://github.com/huggingface/datasets/pull/2527 | 926,031,525 | MDExOlB1bGxSZXF1ZXN0Njc0MzkzNjQ5 | 2,527 | Replace bad `n>1M` size tag | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-06-21T09:42:35Z | 2021-06-21T15:06:50Z | 2021-06-21T15:06:49Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2527.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2527",
"merged_at": "2021-06-21T15:06:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2527.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2527"
} | Some datasets were still using the old `n>1M` tag which has been replaced with tags `1M<n<10M`, etc.
This resulted in unexpected results when searching for datasets bigger than 1M on the hub, since it was only showing the ones with the tag `n>1M`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2527/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2527/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4341/comments | https://api.github.com/repos/huggingface/datasets/issues/4341/events | https://github.com/huggingface/datasets/issues/4341 | 1,234,739,703 | I_kwDODunzps5JmKH3 | 4,341 | Failing CI on Windows for sari and wiki_split metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2022-05-13T04:55:17Z | 2022-05-13T05:47:41Z | 2022-05-13T05:47:41Z | MEMBER | null | null | null | ## Describe the bug
Our CI has been failing since yesterday on Windows for the metrics sari and wiki_split:
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_sari - ...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split
```
See: https://app.circleci.com/pipelines/github/huggingface/datasets/11928/workflows/79daa5e7-65c9-4e85-829b-00d2bfbd076a/jobs/71594 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4341/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4341/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5781/comments | https://api.github.com/repos/huggingface/datasets/issues/5781/events | https://github.com/huggingface/datasets/issues/5781 | 1,679,580,460 | I_kwDODunzps5kHF0s | 5,781 | Error using `load_datasets` | {
"avatar_url": "https://avatars.githubusercontent.com/u/61463108?v=4",
"events_url": "https://api.github.com/users/gjyoungjr/events{/privacy}",
"followers_url": "https://api.github.com/users/gjyoungjr/followers",
"following_url": "https://api.github.com/users/gjyoungjr/following{/other_user}",
"gists_url": "https://api.github.com/users/gjyoungjr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gjyoungjr",
"id": 61463108,
"login": "gjyoungjr",
"node_id": "MDQ6VXNlcjYxNDYzMTA4",
"organizations_url": "https://api.github.com/users/gjyoungjr/orgs",
"received_events_url": "https://api.github.com/users/gjyoungjr/received_events",
"repos_url": "https://api.github.com/users/gjyoungjr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gjyoungjr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gjyoungjr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gjyoungjr"
} | [] | closed | false | null | [] | null | [
"It looks like an issue with your installation of scipy, can you try reinstalling it ?",
"Sorry for the late reply, but that worked @lhoestq . Thanks for the assist."
] | 2023-04-22T15:10:44Z | 2023-05-02T23:41:25Z | 2023-05-02T23:41:25Z | NONE | null | null | null | ### Describe the bug
I tried to load a dataset using the `datasets` library in a conda jupyter notebook and got the below error.
```
ImportError: dlopen(/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
Referenced from: <65B094A2-59D7-31AC-A966-4DB9E11D2A15> /Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so
Reason: tried: '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache)
```
### Steps to reproduce the bug
Run the `load_datasets` function
### Expected behavior
I expected the dataset to be loaded into my notebook.
### Environment info
name: review_sense
channels:
- apple
- conda-forge
dependencies:
- python=3.8
- pip>=19.0
- jupyter
- tensorflow-deps
#- scikit-learn
#- scipy
- pandas
- pandas-datareader
- matplotlib
- pillow
- tqdm
- requests
- h5py
- pyyaml
- flask
- boto3
- ipykernel
- seaborn
- pip:
- tensorflow-macos==2.9
- tensorflow-metal==0.5.0
- bayesian-optimization
- gym
- kaggle
- huggingface_hub
- datasets
- numpy
- huggingface
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5781/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5781/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4297/comments | https://api.github.com/repos/huggingface/datasets/issues/4297/events | https://github.com/huggingface/datasets/issues/4297 | 1,229,735,498 | I_kwDODunzps5JTEZK | 4,297 | Datasets YAML tagging space is down | {
"avatar_url": "https://avatars.githubusercontent.com/u/121934?v=4",
"events_url": "https://api.github.com/users/leondz/events{/privacy}",
"followers_url": "https://api.github.com/users/leondz/followers",
"following_url": "https://api.github.com/users/leondz/following{/other_user}",
"gists_url": "https://api.github.com/users/leondz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/leondz",
"id": 121934,
"login": "leondz",
"node_id": "MDQ6VXNlcjEyMTkzNA==",
"organizations_url": "https://api.github.com/users/leondz/orgs",
"received_events_url": "https://api.github.com/users/leondz/received_events",
"repos_url": "https://api.github.com/users/leondz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/leondz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/leondz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/leondz"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [
"@lhoestq @albertvillanova `update-task-list` branch does not exist anymore, should point to `main` now i guess",
"Thanks for reporting, fixing it now",
"It's up again :)"
] | 2022-05-09T13:45:05Z | 2022-05-09T14:44:25Z | 2022-05-09T14:44:25Z | CONTRIBUTOR | null | null | null | ## Describe the bug
The neat HF Spaces app for generating YAML tags for dataset `README.md`s is down
## Steps to reproduce the bug
1. Visit https://huggingface.co/spaces/huggingface/datasets-tagging
## Expected results
There'll be an HF Spaces web app for generating dataset metadata YAML
## Actual results
There's an error message; here's the step where it breaks:
```
Step 18/29 : RUN pip install -r requirements.txt
---> Running in e88bfe7e7e0c
Defaulting to user installation because normal site-packages is not writeable
Collecting git+https://github.com/huggingface/datasets.git@update-task-list (from -r requirements.txt (line 4))
Cloning https://github.com/huggingface/datasets.git (to revision update-task-list) to /tmp/pip-req-build-bm8t0r0k
Running command git clone --filter=blob:none --quiet https://github.com/huggingface/datasets.git /tmp/pip-req-build-bm8t0r0k
WARNING: Did not find branch or tag 'update-task-list', assuming revision or ref.
Running command git checkout -q update-task-list
error: pathspec 'update-task-list' did not match any file(s) known to git
error: subprocess-exited-with-error
× git checkout -q update-task-list did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× git checkout -q update-task-list did not run successfully.
│ exit code: 1
╰─> See above for output.
```
## Environment info
- Platform: Linux / Brave
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4297/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4297/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1405/comments | https://api.github.com/repos/huggingface/datasets/issues/1405/events | https://github.com/huggingface/datasets/pull/1405 | 760,578,035 | MDExOlB1bGxSZXF1ZXN0NTM1Mzg2ODA1 | 1,405 | Adding TaPaCo Dataset with README.md | {
"avatar_url": "https://avatars.githubusercontent.com/u/13534540?v=4",
"events_url": "https://api.github.com/users/pacman100/events{/privacy}",
"followers_url": "https://api.github.com/users/pacman100/followers",
"following_url": "https://api.github.com/users/pacman100/following{/other_user}",
"gists_url": "https://api.github.com/users/pacman100/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/pacman100",
"id": 13534540,
"login": "pacman100",
"node_id": "MDQ6VXNlcjEzNTM0NTQw",
"organizations_url": "https://api.github.com/users/pacman100/orgs",
"received_events_url": "https://api.github.com/users/pacman100/received_events",
"repos_url": "https://api.github.com/users/pacman100/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/pacman100/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pacman100/subscriptions",
"type": "User",
"url": "https://api.github.com/users/pacman100"
} | [] | closed | false | null | [] | null | [
"We want to keep the repo as light as possible so that it doesn't take ages to clone, that's why we ask for small dummy data files (especially when there are many of them). Let me know if you have questions or if we can help you on this",
"Hello @lhoestq , made the changes as you suggested and pushed, please revi... | 2020-12-09T18:42:58Z | 2020-12-13T19:11:18Z | 2020-12-13T19:11:18Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1405.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1405",
"merged_at": "2020-12-13T19:11:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1405.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1405"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1405/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1405/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/5155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5155/comments | https://api.github.com/repos/huggingface/datasets/issues/5155/events | https://github.com/huggingface/datasets/pull/5155 | 1,421,278,748 | PR_kwDODunzps5BcCYr | 5,155 | TextConfig: added "errors" | {
"avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4",
"events_url": "https://api.github.com/users/NightMachinery/events{/privacy}",
"followers_url": "https://api.github.com/users/NightMachinery/followers",
"following_url": "https://api.github.com/users/NightMachinery/following{/other_user}",
"gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/NightMachinery",
"id": 36224762,
"login": "NightMachinery",
"node_id": "MDQ6VXNlcjM2MjI0NzYy",
"organizations_url": "https://api.github.com/users/NightMachinery/orgs",
"received_events_url": "https://api.github.com/users/NightMachinery/received_events",
"repos_url": "https://api.github.com/users/NightMachinery/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions",
"type": "User",
"url": "https://api.github.com/users/NightMachinery"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for adding this ! You can fix the CI by formatting your code using the `make style` command :)",
"[**@lhoestq**](https://github.com/lhoestq) commented on [Oct 27, 2022, 4:08 PM GMT+3:30](https://github.com/huggingface/datase... | 2022-10-24T18:56:52Z | 2022-11-03T13:38:13Z | 2022-11-03T13:35:35Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5155.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5155",
"merged_at": "2022-11-03T13:35:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5155.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5155"
} | This patch adds the ability to set the `errors` option of `open` for loading text datasets. I needed it because some data I had scraped had bad bytes in it, so I needed `errors='ignore'`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5155/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5155/timeline | null | null | true |
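To make the `errors` option described in the row above concrete, here is a minimal sketch of how it could be passed when loading a plain-text file. The file name is a placeholder, and the exact keyword accepted by the text builder depends on the installed `datasets` version, so treat the call as illustrative rather than authoritative.

```python
from datasets import load_dataset

# Illustrative only: forward an `errors` option to the text builder so that
# undecodable bytes are skipped instead of raising UnicodeDecodeError.
# "scraped_corpus.txt" is a placeholder path; the keyword name follows the
# patch description above and may differ between `datasets` versions.
dataset = load_dataset(
    "text",
    data_files={"train": "scraped_corpus.txt"},
    encoding="utf-8",
    errors="ignore",
)
print(dataset["train"][0])
```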
https://api.github.com/repos/huggingface/datasets/issues/5263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5263/comments | https://api.github.com/repos/huggingface/datasets/issues/5263/events | https://github.com/huggingface/datasets/issues/5263 | 1,455,252,626 | I_kwDODunzps5WvWSS | 5,263 | Save a dataset in a determined number of shards | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [] | 2022-11-18T14:43:54Z | 2022-12-14T18:22:59Z | 2022-12-14T18:22:59Z | MEMBER | null | null | null | This is useful to distribute the shards to training nodes.
This can be implemented in `save_to_disk` and can also leverage multiprocessing to speed up the process. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5263/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5263/timeline | null | completed | false |
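A minimal sketch of the API requested above follows, assuming `save_to_disk` grows `num_shards` and `num_proc` parameters; both names are taken from the feature request and only exist in versions of `datasets` that implement it.

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# Hypothetical call matching the request above: write a fixed number of Arrow
# shards (e.g. one per training node) and use several processes to speed up
# the write. `num_shards` and `num_proc` are assumptions from the request.
ds.save_to_disk("imdb_train_sharded", num_shards=8, num_proc=4)
```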
https://api.github.com/repos/huggingface/datasets/issues/2547 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2547/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2547/comments | https://api.github.com/repos/huggingface/datasets/issues/2547/events | https://github.com/huggingface/datasets/issues/2547 | 929,192,329 | MDU6SXNzdWU5MjkxOTIzMjk= | 2,547 | Dataset load_from_disk is too slow | {
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"events_url": "https://api.github.com/users/avacaondata/events{/privacy}",
"followers_url": "https://api.github.com/users/avacaondata/followers",
"following_url": "https://api.github.com/users/avacaondata/following{/other_user}",
"gists_url": "https://api.github.com/users/avacaondata/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/avacaondata",
"id": 35173563,
"login": "avacaondata",
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"organizations_url": "https://api.github.com/users/avacaondata/orgs",
"received_events_url": "https://api.github.com/users/avacaondata/received_events",
"repos_url": "https://api.github.com/users/avacaondata/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/avacaondata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/avacaondata/subscriptions",
"type": "User",
"url": "https://api.github.com/users/avacaondata"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Hi ! It looks like an issue with the virtual disk you are using.\r\n\r\nWe load datasets using memory mapping. In general it makes it possible to load very big files instantaneously since it doesn't have to read the file (it just assigns virtual memory to the file on disk).\r\nHowever there happens to be issues wi... | 2021-06-24T12:45:44Z | 2021-06-25T14:56:38Z | null | NONE | null | null | null | @lhoestq
## Describe the bug
It's not normal that I have to wait 7-8 hours for a dataset to be loaded from disk, as there are no preprocessing steps, it's only loading it with load_from_disk. I have 96 cpus, however only 1 is used for this, which is inefficient. Moreover, its usage is at 1%... This is happening in the context of a language model training, therefore I'm wasting 100$ each time I have to load the dataset from disk again (because the spot instance was stopped by aws and I need to relaunch it for example).
## Steps to reproduce the bug
Just get the OSCAR dataset in Spanish (around 150 GB) and try to first save it to disk and then load the processed dataset. It's not dependent on the task you're doing; it just depends on the size of the text dataset.
## Expected results
I expect the dataset to be loaded in a reasonable time by using the whole machine for loading it. I mean, if you store the dataset in multiple (.arrow) files and then load it from those files, you can use multiprocessing for that and therefore not waste so much time.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Ubuntu 18
- Python version: 3.8
I've seen you're planning to include a streaming mode for load_dataset, but that only saves the downloading and processing time, which is not the problem for me; it cannot save the pure loading-from-disk time, so that's not a solution for my use case or for anyone who wants to use your library for training a language model. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2547/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2547/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1751/comments | https://api.github.com/repos/huggingface/datasets/issues/1751/events | https://github.com/huggingface/datasets/pull/1751 | 789,232,980 | MDExOlB1bGxSZXF1ZXN0NTU3NjA1ODE2 | 1,751 | Updated README for the Social Bias Frames dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mcmillanmajora",
"id": 26722925,
"login": "mcmillanmajora",
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mcmillanmajora"
} | [] | closed | false | null | [] | null | [] | 2021-01-19T17:53:00Z | 2021-01-20T14:56:52Z | 2021-01-20T14:56:52Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1751.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1751",
"merged_at": "2021-01-20T14:56:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1751.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1751"
} | See the updated card at https://github.com/mcmillanmajora/datasets/tree/add-SBIC-card/datasets/social_bias_frames. I incorporated information from the [SBIC data statement](https://homes.cs.washington.edu/~msap/social-bias-frames/DATASTATEMENT.html), paper, and the corpus README file included with the dataset download. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1751/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1751/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5178/comments | https://api.github.com/repos/huggingface/datasets/issues/5178/events | https://github.com/huggingface/datasets/issues/5178 | 1,430,800,810 | I_kwDODunzps5VSEmq | 5,178 | Unable to download the Chinese `wikipedia`, the dumpstatus.json not found! | {
"avatar_url": "https://avatars.githubusercontent.com/u/37113676?v=4",
"events_url": "https://api.github.com/users/beyondguo/events{/privacy}",
"followers_url": "https://api.github.com/users/beyondguo/followers",
"following_url": "https://api.github.com/users/beyondguo/following{/other_user}",
"gists_url": "https://api.github.com/users/beyondguo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/beyondguo",
"id": 37113676,
"login": "beyondguo",
"node_id": "MDQ6VXNlcjM3MTEzNjc2",
"organizations_url": "https://api.github.com/users/beyondguo/orgs",
"received_events_url": "https://api.github.com/users/beyondguo/received_events",
"repos_url": "https://api.github.com/users/beyondguo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/beyondguo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/beyondguo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/beyondguo"
} | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"In the dumps page of the wiki (https://dumps.wikimedia.org/zhwiki/), I found the following dumps:\r\n```\r\nIndex of /zhwiki/\r\n[../](https://dumps.wikimedia.org/)\r\n[20220701/](https://dumps.wikimedia.org/zhwiki/20220701/) 21-Aug-2022 01:48 -\r\n[202207... | 2022-11-01T03:17:55Z | 2022-11-02T08:27:15Z | 2022-11-02T08:24:29Z | NONE | null | null | null | ### Describe the bug
I tried:
`data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')`
and
`data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')`
but both got:
`FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json`
the full report is:
```
FileNotFoundError Traceback (most recent call last)
<ipython-input-13-d07c5021090c> in <module>
1 from datasets import load_dataset
2
----> 3 data = load_dataset("wikipedia", language="zh", date="20220301", beam_runner='DirectRunner')
/opt/conda/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
1740
1741 # Download and prepare data
-> 1742 builder_instance.download_and_prepare(
1743 download_config=download_config,
1744 download_mode=download_mode,
/opt/conda/lib/python3.8/site-packages/datasets/builder.py in download_and_prepare(self, output_dir, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, storage_options, **download_and_prepare_kwargs)
812 **download_and_prepare_kwargs,
813 }
--> 814 self._download_and_prepare(
815 dl_manager=dl_manager,
816 verify_infos=verify_infos,
/opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_splits_kwargs)
1645 options=beam_options,
1646 )
-> 1647 super()._download_and_prepare(
1648 dl_manager, verify_infos=False, pipeline=pipeline, **prepare_splits_kwargs
1649 ) # TODO handle verify_infos in beam datasets
/opt/conda/lib/python3.8/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
881 split_dict = SplitDict(dataset_name=self.name)
882 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 883 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
884
885 # Checksums verification
~/.cache/huggingface/modules/datasets_modules/datasets/wikipedia/aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559/wikipedia.py in _split_generators(self, dl_manager, pipeline)
943 info_url = _base_url(lang) + _INFO_FILE
944 # Use dictionary since testing mock always returns the same result.
--> 945 downloaded_files = dl_manager.download_and_extract({"info": info_url})
946
947 xml_urls = []
/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download_and_extract(self, url_or_urls)
431 extracted_path(s): `str`, extracted paths of given URL(s).
432 """
--> 433 return self.extract(self.download(url_or_urls))
434
435 def get_recorded_sizes_checksums(self):
/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in download(self, url_or_urls)
308
309 start_time = datetime.now()
--> 310 downloaded_path_or_paths = map_nested(
311 download_func,
312 url_or_urls,
/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, parallel_min_length, types, disable_tqdm, desc)
427 num_proc = 1
428 if num_proc <= 1 or len(iterable) < parallel_min_length:
--> 429 mapped = [
430 _single_map_nested((function, obj, types, None, True, None))
431 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in <listcomp>(.0)
428 if num_proc <= 1 or len(iterable) < parallel_min_length:
429 mapped = [
--> 430 _single_map_nested((function, obj, types, None, True, None))
431 for obj in logging.tqdm(iterable, disable=disable_tqdm, desc=desc)
432 ]
/opt/conda/lib/python3.8/site-packages/datasets/utils/py_utils.py in _single_map_nested(args)
329 # Singleton first to spare some computation
330 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 331 return function(data_struct)
332
333 # Reduce logging to keep things readable in multiprocessing with tqdm
/opt/conda/lib/python3.8/site-packages/datasets/download/download_manager.py in _download(self, url_or_filename, download_config)
335 # append the relative path to the base_path
336 url_or_filename = url_or_path_join(self._base_path, url_or_filename)
--> 337 return cached_path(url_or_filename, download_config=download_config)
338
339 def iter_archive(self, path_or_buf: Union[str, io.BufferedReader]):
/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
186 if is_remote_url(url_or_filename):
187 # URL, so get it from the cache (downloading if necessary)
--> 188 output_path = get_from_cache(
189 url_or_filename,
190 cache_dir=cache_dir,
/opt/conda/lib/python3.8/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token, ignore_url_params, download_desc)
533 )
534 elif response is not None and response.status_code == 404:
--> 535 raise FileNotFoundError(f"Couldn't find file at {url}")
536 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
537 if head_error is not None:
FileNotFoundError: Couldn't find file at https://dumps.wikimedia.org/zhwiki/20220301/dumpstatus.json
```
### Steps to reproduce the bug
`data = load_dataset('wikipedia', '20220301.zh', beam_runner='DirectRunner')`
### Expected behavior
download the data
### Environment info
python3.6
latest datasets/transformers version | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5178/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5178/timeline | null | completed | false |
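The failure above comes from requesting a dump date that is no longer hosted on dumps.wikimedia.org. A sketch of the workaround mentioned in the comments is below; the date shown is only a placeholder and must be replaced by one currently listed at https://dumps.wikimedia.org/zhwiki/.

```python
from datasets import load_dataset

# Only dump dates still listed at https://dumps.wikimedia.org/zhwiki/ can be
# downloaded; "20221101" is a placeholder and should be swapped for an
# existing date, otherwise dumpstatus.json will again not be found.
data = load_dataset(
    "wikipedia",
    language="zh",
    date="20221101",
    beam_runner="DirectRunner",
)
```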
https://api.github.com/repos/huggingface/datasets/issues/1711 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1711/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1711/comments | https://api.github.com/repos/huggingface/datasets/issues/1711/events | https://github.com/huggingface/datasets/pull/1711 | 782,129,083 | MDExOlB1bGxSZXF1ZXN0NTUxNzQxODA2 | 1,711 | Fix windows path scheme in cached path | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-01-08T13:45:56Z | 2021-01-11T09:23:20Z | 2021-01-11T09:23:19Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1711.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1711",
"merged_at": "2021-01-11T09:23:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1711.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1711"
} | As noticed in #807 there's currently an issue with `cached_path` not raising `FileNotFoundError` on windows for absolute paths. This is due to the way we check for a path to be local or not. The check on the scheme using urlparse was incomplete.
I fixed this and added tests | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1711/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1711/timeline | null | null | true |
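A small illustration of why a scheme-based check can misclassify Windows paths, which is the incompleteness the fix above refers to: a drive letter parses as a one-character URL scheme. The `is_remote_url` helper below is a hypothetical sketch, not the library's actual code.

```python
from urllib.parse import urlparse

# A Windows drive letter is parsed as a one-character scheme, so a naive
# "non-empty scheme means remote URL" check treats an absolute local path as
# remote and never raises FileNotFoundError for a missing local file.
print(urlparse("C:\\Users\\me\\data.csv").scheme)       # 'c'
print(urlparse("https://example.com/data.csv").scheme)  # 'https'

def is_remote_url(path: str) -> bool:
    # Hypothetical stricter check: only well-known schemes count as remote.
    return urlparse(path).scheme in {"http", "https", "s3", "gs", "hdfs", "ftp"}
```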
https://api.github.com/repos/huggingface/datasets/issues/1237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1237/comments | https://api.github.com/repos/huggingface/datasets/issues/1237/events | https://github.com/huggingface/datasets/pull/1237 | 758,318,353 | MDExOlB1bGxSZXF1ZXN0NTMzNTExMDky | 1,237 | Add AmbigQA dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4",
"events_url": "https://api.github.com/users/cceyda/events{/privacy}",
"followers_url": "https://api.github.com/users/cceyda/followers",
"following_url": "https://api.github.com/users/cceyda/following{/other_user}",
"gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cceyda",
"id": 15624271,
"login": "cceyda",
"node_id": "MDQ6VXNlcjE1NjI0Mjcx",
"organizations_url": "https://api.github.com/users/cceyda/orgs",
"received_events_url": "https://api.github.com/users/cceyda/received_events",
"repos_url": "https://api.github.com/users/cceyda/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cceyda/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cceyda"
} | [] | closed | false | null | [] | null | [] | 2020-12-07T09:07:19Z | 2020-12-08T13:38:52Z | 2020-12-08T13:38:52Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1237.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1237",
"merged_at": "2020-12-08T13:38:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1237.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1237"
} | # AmbigQA: Answering Ambiguous Open-domain Questions Dataset
Adding the [AmbigQA](https://nlp.cs.washington.edu/ambigqa/) dataset as part of the sprint 🎉 (from Open dataset list for Dataset sprint)
Added both the light and full versions (as seen on the dataset homepage)
The json format changes based on the value of one 'type' field, so I set the unavailable field to an empty list. This is explained in the README -> Data Fields
```py
train_light_dataset = load_dataset('./datasets/ambig_qa',"light",split="train")
val_light_dataset = load_dataset('./datasets/ambig_qa',"light",split="validation")
train_full_dataset = load_dataset('./datasets/ambig_qa',"full",split="train")
val_full_dataset = load_dataset('./datasets/ambig_qa',"full",split="validation")
for example in train_light_dataset:
for i,t in enumerate(example['annotations']['type']):
if t =='singleAnswer':
# use the example['annotations']['answer'][i]
# example['annotations']['qaPairs'][i] - > is []
print(example['annotations']['answer'][i])
else:
# use the example['annotations']['qaPairs'][i]
# example['annotations']['answer'][i] - > is []
print(example['annotations']['qaPairs'][i])
```
- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1237/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1237/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5287/comments | https://api.github.com/repos/huggingface/datasets/issues/5287/events | https://github.com/huggingface/datasets/pull/5287 | 1,461,971,889 | PR_kwDODunzps5Dkttf | 5,287 | Fix methods using `IterableDataset.map` that lead to `features=None` | {
"avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4",
"events_url": "https://api.github.com/users/alvarobartt/events{/privacy}",
"followers_url": "https://api.github.com/users/alvarobartt/followers",
"following_url": "https://api.github.com/users/alvarobartt/following{/other_user}",
"gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/alvarobartt",
"id": 36760800,
"login": "alvarobartt",
"node_id": "MDQ6VXNlcjM2NzYwODAw",
"organizations_url": "https://api.github.com/users/alvarobartt/orgs",
"received_events_url": "https://api.github.com/users/alvarobartt/received_events",
"repos_url": "https://api.github.com/users/alvarobartt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/alvarobartt"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"_The documentation is not available anymore as the PR was closed or merged._",
"Maybe other options are:\r\n* Keep the `info.features` to `None` if those were initially `None`\r\n* Infer the features with pre-fetching just if the `... | 2022-11-23T15:33:25Z | 2022-11-28T15:43:14Z | 2022-11-28T12:53:22Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5287.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5287",
"merged_at": "2022-11-28T12:53:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5287.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5287"
} | As `IterableDataset.map` currently sets `info.features` to `None` every time (since we don't know the output of the dataset in advance), the `IterableDataset` methods that internally use `map`, such as `rename_column`, `rename_columns`, and `remove_columns`, also end up with `features` set to `None`.
This PR is related to #3888, #5245, and #5284
## ✅ Current solution
The code in this PR is basically making sure that if the features were there since the beginning and a `rename_column`/`rename_columns` happens, those are kept and the rename is applied to the `Features` too. Also, if the features were not there before applying `rename_column`, `rename_columns` or `remove_columns`, a batch is prefetched and the features are being inferred (that could potentially be part of `IterableDataset.__init__` in case the `info.features` value is `None`).
## 💡 Ideas
Some ideas were proposed in https://github.com/huggingface/datasets/issues/3888, but probably the most consistent solution even though it may take some time is to actually do the type inferencing during the `IterableDataset.__init__` in case the provided `info.features` is `None`, otherwise, we can just use the provided features.
Additionally, as mentioned at https://github.com/huggingface/datasets/issues/3888, we could also include a `features` parameter to the `map` function, but that's probably more tedious.
Also thanks to @lhoestq for sharing some ideas in both https://github.com/huggingface/datasets/issues/3888 and https://github.com/huggingface/datasets/issues/5245 :hugs: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5287/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5287/timeline | null | null | true |
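An illustrative reproduction of the behaviour the pull request above addresses, using a streaming dataset; the dataset name is just an example, and what `features` returns after `rename_column` depends on whether the fix is present in the installed `datasets` version.

```python
from datasets import load_dataset

# Streaming (IterableDataset) example; "rotten_tomatoes" is only an example.
ids = load_dataset("rotten_tomatoes", split="train", streaming=True)
print(ids.features)      # Features with "text" and "label"

renamed = ids.rename_column("text", "content")
# rename_column is implemented on top of map(); before the fix above this
# printed None because map() reset info.features, afterwards the renamed
# Features object is kept.
print(renamed.features)
```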
https://api.github.com/repos/huggingface/datasets/issues/2570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2570/comments | https://api.github.com/repos/huggingface/datasets/issues/2570/events | https://github.com/huggingface/datasets/pull/2570 | 933,402,521 | MDExOlB1bGxSZXF1ZXN0NjgwNjEzNzc0 | 2,570 | Minor fix docs format for bertscore | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2021-06-30T07:42:12Z | 2021-06-30T15:31:01Z | 2021-06-30T15:31:01Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2570.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2570",
"merged_at": "2021-06-30T15:31:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2570.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2570"
} | Minor fix docs format for bertscore:
- link to README
- format of KWARGS_DESCRIPTION | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2570/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2570/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2425 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2425/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2425/comments | https://api.github.com/repos/huggingface/datasets/issues/2425/events | https://github.com/huggingface/datasets/pull/2425 | 906,385,457 | MDExOlB1bGxSZXF1ZXN0NjU3NDAwMjM3 | 2,425 | Fix Docstring Mistake: dataset vs. metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4",
"events_url": "https://api.github.com/users/PhilipMay/events{/privacy}",
"followers_url": "https://api.github.com/users/PhilipMay/followers",
"following_url": "https://api.github.com/users/PhilipMay/following{/other_user}",
"gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PhilipMay",
"id": 229382,
"login": "PhilipMay",
"node_id": "MDQ6VXNlcjIyOTM4Mg==",
"organizations_url": "https://api.github.com/users/PhilipMay/orgs",
"received_events_url": "https://api.github.com/users/PhilipMay/received_events",
"repos_url": "https://api.github.com/users/PhilipMay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PhilipMay"
} | [] | closed | false | null | [] | null | [
"IMO this PR is ready for review. I do not know why tests fail...",
"The CI fail is unrelated to this PR, and it has been fixed on master, merging :)",
"> I just have one comment: we use rouge, not rogue :p\r\n\r\nOops!",
"rebased on master"
] | 2021-05-29T06:09:53Z | 2021-06-01T08:18:04Z | 2021-06-01T08:18:04Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2425.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2425",
"merged_at": "2021-06-01T08:18:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2425.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2425"
} | PR to fix #2412 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2425/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2425/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4713/comments | https://api.github.com/repos/huggingface/datasets/issues/4713/events | https://github.com/huggingface/datasets/pull/4713 | 1,309,184,756 | PR_kwDODunzps47ojC1 | 4,713 | Document installation of sox OS dependency for audio | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-19T08:42:35Z | 2022-07-21T08:16:59Z | 2022-07-21T08:04:15Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4713.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4713",
"merged_at": "2022-07-21T08:04:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4713.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4713"
} | The `sox` OS package needs to be installed manually using the distribution's package manager.
This PR adds this explanation to the docs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4713/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4713/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1484 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1484/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1484/comments | https://api.github.com/repos/huggingface/datasets/issues/1484/events | https://github.com/huggingface/datasets/pull/1484 | 762,747,096 | MDExOlB1bGxSZXF1ZXN0NTM3MjYzMDc5 | 1,484 | Add peer-read dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/34424769?v=4",
"events_url": "https://api.github.com/users/vinaykudari/events{/privacy}",
"followers_url": "https://api.github.com/users/vinaykudari/followers",
"following_url": "https://api.github.com/users/vinaykudari/following{/other_user}",
"gists_url": "https://api.github.com/users/vinaykudari/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vinaykudari",
"id": 34424769,
"login": "vinaykudari",
"node_id": "MDQ6VXNlcjM0NDI0NzY5",
"organizations_url": "https://api.github.com/users/vinaykudari/orgs",
"received_events_url": "https://api.github.com/users/vinaykudari/received_events",
"repos_url": "https://api.github.com/users/vinaykudari/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vinaykudari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vinaykudari/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vinaykudari"
} | [] | closed | false | null | [] | null | [
"> Cool thank you !\r\n> \r\n> I left a few comments\r\n\r\nThank you @lhoestq addressed your comments. Haven't changed the code but I see that tests are failing now. Do I need to rebase or something? ",
"The CI error is not related to your dataset and is fixed on master.\r\nYou can ignore it"
] | 2020-12-11T18:43:44Z | 2020-12-21T09:40:50Z | 2020-12-21T09:40:50Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1484.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1484",
"merged_at": "2020-12-21T09:40:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1484.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1484"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1484/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1484/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/3650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3650/comments | https://api.github.com/repos/huggingface/datasets/issues/3650/events | https://github.com/huggingface/datasets/pull/3650 | 1,118,537,429 | PR_kwDODunzps4xyr2o | 3,650 | Allow 'to_json' to run in unordered fashion in order to lower memory footprint | {
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomasw21",
"id": 24695242,
"login": "thomasw21",
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomasw21"
} | [] | closed | false | null | [] | null | [
"Hi @thomasw21, I remember suggesting `imap_unordered` to @lhoestq at that time to speed up `to_json` further but after trying `pool_imap` on multiple datasets (>9GB) , memory utilisation was almost constant and we decided to go ahead with that only. \r\n\r\n1. Did you try this without `gzip`? Because `gzip` featu... | 2022-01-30T13:23:19Z | 2023-09-25T06:28:51Z | 2023-09-24T16:45:48Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3650.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3650",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3650.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3650"
} | I'm using `to_json(..., num_proc=num_proc, compression='gzip')` with `num_proc>1`. I'm having an issue where things seem to deadlock at some point, and eventually I see an OOM. I'm guessing one process starts to take a long time for a specific batch, so the other processes keep accumulating their results in memory.
In order to flush memory, I propose we optionally use `imap_unordered`. This will prevent one process from blocking the others. The reasoning is that indices are rarely relevant, and if one wants to keep an index, one can still create another column and reconstruct it from there. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3650/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3650/timeline | null | null | true |
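For context on the call discussed above, a minimal multiprocessed export sketch is shown below. The ordered export is the existing API; the unordered flag in the commented-out line is purely hypothetical and only illustrates what the pull request proposes.

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# Multiprocessed JSON Lines export with gzip compression, as in the report above.
ds.to_json("imdb_train.jsonl.gz", num_proc=4, compression="gzip")

# Hypothetical opt-in from the proposal: let workers return batches in any
# order (imap_unordered under the hood) so one slow batch does not make the
# other processes buffer their results in memory. The flag name is an assumption.
# ds.to_json("imdb_train.jsonl.gz", num_proc=4, compression="gzip", unordered=True)
```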
https://api.github.com/repos/huggingface/datasets/issues/2154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2154/comments | https://api.github.com/repos/huggingface/datasets/issues/2154/events | https://github.com/huggingface/datasets/pull/2154 | 846,763,960 | MDExOlB1bGxSZXF1ZXN0NjA1ODM2Mjc1 | 2,154 | Adding the NorNE dataset for Norwegian POS and NER | {
"avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4",
"events_url": "https://api.github.com/users/versae/events{/privacy}",
"followers_url": "https://api.github.com/users/versae/followers",
"following_url": "https://api.github.com/users/versae/following{/other_user}",
"gists_url": "https://api.github.com/users/versae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/versae",
"id": 173537,
"login": "versae",
"node_id": "MDQ6VXNlcjE3MzUzNw==",
"organizations_url": "https://api.github.com/users/versae/orgs",
"received_events_url": "https://api.github.com/users/versae/received_events",
"repos_url": "https://api.github.com/users/versae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/versae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/versae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/versae"
} | [] | closed | false | null | [] | null | [
"Awesome!"
] | 2021-03-31T14:22:50Z | 2021-04-01T09:27:00Z | 2021-04-01T09:16:08Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2154.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2154",
"merged_at": "2021-04-01T09:16:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2154.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2154"
} | NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, organizations, locations, geo-political entities, products, and events, in addition to a class corresponding to nominals derived from names.
See #1720. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2154/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2154/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2085 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2085/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2085/comments | https://api.github.com/repos/huggingface/datasets/issues/2085/events | https://github.com/huggingface/datasets/pull/2085 | 835,870,994 | MDExOlB1bGxSZXF1ZXN0NTk2NDYyOTc2 | 2,085 | Fix max_wait_time in requests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-03-19T11:22:26Z | 2021-03-23T15:36:38Z | 2021-03-23T15:36:37Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2085.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2085",
"merged_at": "2021-03-23T15:36:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2085.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2085"
} | it was handled as a min time, not max cc @SBrandeis | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2085/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2085/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4221/comments | https://api.github.com/repos/huggingface/datasets/issues/4221/events | https://github.com/huggingface/datasets/issues/4221 | 1,215,911,182 | I_kwDODunzps5IeVUO | 4,221 | Dictionary Feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/2944532?v=4",
"events_url": "https://api.github.com/users/jordiae/events{/privacy}",
"followers_url": "https://api.github.com/users/jordiae/followers",
"following_url": "https://api.github.com/users/jordiae/following{/other_user}",
"gists_url": "https://api.github.com/users/jordiae/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jordiae",
"id": 2944532,
"login": "jordiae",
"node_id": "MDQ6VXNlcjI5NDQ1MzI=",
"organizations_url": "https://api.github.com/users/jordiae/orgs",
"received_events_url": "https://api.github.com/users/jordiae/received_events",
"repos_url": "https://api.github.com/users/jordiae/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jordiae/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jordiae/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jordiae"
} | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi @jordiae,\r\n\r\nInstead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:\r\n```python\r\n\"list_of_dict_feature\": [\r\n {\r\n \"key1_in_dict\": datasets.Value(\"string\"),\r\n \"key2_in_dict\": datasets.Value(\"int32\"),\r\n ...\r\n }\r\n... | 2022-04-26T12:50:18Z | 2022-04-29T14:52:19Z | 2022-04-28T17:04:58Z | NONE | null | null | null | Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?
Thank you in advance. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4221/timeline | null | completed | false |
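Following the suggestion quoted in the comments of the row above, a self-contained sketch of declaring a list-of-dictionaries feature is shown below; the column and field names are placeholders.

```python
import datasets

# A list of dictionaries is declared by wrapping the dict in a plain Python
# list instead of using Sequence. "annotations", "label" and "score" are
# placeholder names.
features = datasets.Features(
    {
        "id": datasets.Value("string"),
        "annotations": [
            {
                "label": datasets.Value("string"),
                "score": datasets.Value("float32"),
            }
        ],
    }
)

ds = datasets.Dataset.from_dict(
    {"id": ["ex-1"], "annotations": [[{"label": "positive", "score": 0.9}]]},
    features=features,
)
print(ds.features)
```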
https://api.github.com/repos/huggingface/datasets/issues/1339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1339/comments | https://api.github.com/repos/huggingface/datasets/issues/1339/events | https://github.com/huggingface/datasets/pull/1339 | 759,744,088 | MDExOlB1bGxSZXF1ZXN0NTM0Njk0NDI4 | 1,339 | hate_speech_18 initial commit | {
"avatar_url": "https://avatars.githubusercontent.com/u/75574105?v=4",
"events_url": "https://api.github.com/users/czabo/events{/privacy}",
"followers_url": "https://api.github.com/users/czabo/followers",
"following_url": "https://api.github.com/users/czabo/following{/other_user}",
"gists_url": "https://api.github.com/users/czabo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/czabo",
"id": 75574105,
"login": "czabo",
"node_id": "MDQ6VXNlcjc1NTc0MTA1",
"organizations_url": "https://api.github.com/users/czabo/orgs",
"received_events_url": "https://api.github.com/users/czabo/received_events",
"repos_url": "https://api.github.com/users/czabo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/czabo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/czabo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/czabo"
} | [] | closed | false | null | [] | null | [
"> Nice thanks !\r\n> \r\n> Can you rename the dataset folder and the dataset script name `hate_speech18` instead of `hate_speech_18` to follow the snake case convention we're using ?\r\n> \r\n> Also it looks like the dummy_data.zip file is quite big (almost 4MB).\r\n> Can you try to reduce its size ?\r\n> \r\n> To... | 2020-12-08T20:10:08Z | 2020-12-12T16:17:32Z | 2020-12-12T16:17:32Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1339.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1339",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1339.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1339"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1339/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1339/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/6140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6140/comments | https://api.github.com/repos/huggingface/datasets/issues/6140/events | https://github.com/huggingface/datasets/issues/6140 | 1,845,384,712 | I_kwDODunzps5t_lYI | 6,140 | Misalignment between file format specified in configs metadata YAML and the inferred builder | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2023-08-10T15:07:34Z | 2023-08-17T20:37:20Z | 2023-08-17T20:37:20Z | MEMBER | null | null | null | There is a misalignment between the format of the `data_files` specified in the configs metadata YAML (CSV):
```yaml
configs:
- config_name: default
data_files:
- split: train
path: data.csv
```
and the inferred builder (JSON). Note there are multiple JSON files in the repo, but they do not appear in the configs metadata YAML.
See: https://huggingface.co/datasets/freddyaboulton/chatinterface_with_image_csv/discussions/1
CC: @freddyaboulton @polinaeterna | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6140/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6140/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2278/comments | https://api.github.com/repos/huggingface/datasets/issues/2278/events | https://github.com/huggingface/datasets/issues/2278 | 870,088,059 | MDU6SXNzdWU4NzAwODgwNTk= | 2,278 | Loss result inGptNeoForCasual | {
"avatar_url": "https://avatars.githubusercontent.com/u/51174606?v=4",
"events_url": "https://api.github.com/users/Yossillamm/events{/privacy}",
"followers_url": "https://api.github.com/users/Yossillamm/followers",
"following_url": "https://api.github.com/users/Yossillamm/following{/other_user}",
"gists_url": "https://api.github.com/users/Yossillamm/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Yossillamm",
"id": 51174606,
"login": "Yossillamm",
"node_id": "MDQ6VXNlcjUxMTc0NjA2",
"organizations_url": "https://api.github.com/users/Yossillamm/orgs",
"received_events_url": "https://api.github.com/users/Yossillamm/received_events",
"repos_url": "https://api.github.com/users/Yossillamm/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Yossillamm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yossillamm/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Yossillamm"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi ! I think you might have to ask on the `transformers` repo on or the forum at https://discuss.huggingface.co/\r\n\r\nClosing since it's not related to this library"
] | 2021-04-28T15:39:52Z | 2021-05-06T16:14:23Z | 2021-05-06T16:14:23Z | NONE | null | null | null | Is there any way you can get the "loss" and "logits" results in the GPT-Neo API? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2278/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2278/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3561/comments | https://api.github.com/repos/huggingface/datasets/issues/3561/events | https://github.com/huggingface/datasets/issues/3561 | 1,098,328,870 | I_kwDODunzps5Bdysm | 3,561 | Cannot load ‘bookcorpusopen’ | {
"avatar_url": "https://avatars.githubusercontent.com/u/54684403?v=4",
"events_url": "https://api.github.com/users/HUIYINXUE/events{/privacy}",
"followers_url": "https://api.github.com/users/HUIYINXUE/followers",
"following_url": "https://api.github.com/users/HUIYINXUE/following{/other_user}",
"gists_url": "https://api.github.com/users/HUIYINXUE/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/HUIYINXUE",
"id": 54684403,
"login": "HUIYINXUE",
"node_id": "MDQ6VXNlcjU0Njg0NDAz",
"organizations_url": "https://api.github.com/users/HUIYINXUE/orgs",
"received_events_url": "https://api.github.com/users/HUIYINXUE/received_events",
"repos_url": "https://api.github.com/users/HUIYINXUE/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/HUIYINXUE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/HUIYINXUE/subscriptions",
"type": "User",
"url": "https://api.github.com/users/HUIYINXUE"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "2edb81",
"default": false,
"descrip... | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"The host of this copy of the dataset (https://the-eye.eu) is down and has been down for a good amount of time ([potentially months](https://www.reddit.com/r/Roms/comments/q82s15/theeye_downdied/))\r\n\r\nFinding this dataset is a little esoteric, as the original authors took down the official BookCorpus dataset so... | 2022-01-10T20:17:18Z | 2022-02-14T09:19:27Z | 2022-02-14T09:18:47Z | NONE | null | null | null | ## Describe the bug
Cannot load 'bookcorpusopen'
## Steps to reproduce the bug
```python
dataset = load_dataset('bookcorpusopen')
```
or
```python
dataset = load_dataset('bookcorpusopen',script_version='master')
```
## Actual results
ConnectionError: Couldn't reach https://the-eye.eu/public/AI/pile_preliminary_components/books1.tar.gz
## Environment info
- `datasets` version: 1.9.0
- Platform: Linux version 3.10.0-1160.45.1.el7.x86_64
- Python version: 3.6.13
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3561/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3561/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3473/comments | https://api.github.com/repos/huggingface/datasets/issues/3473/events | https://github.com/huggingface/datasets/issues/3473 | 1,086,937,610 | I_kwDODunzps5AyVoK | 3,473 | Iterating over a vision dataset doesn't decode the images | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "bfdadc",
"default": false,
"descrip... | closed | false | null | [] | null | [
"As discussed, I remember I set `decoded=False` here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the \"audio\" field (for Audio feature) was accessed.",
"> I set decoded=False here to avoid decoding just by iterating over examples of dataset. We wanted to decode only ... | 2021-12-22T15:26:32Z | 2021-12-27T14:13:21Z | 2021-12-23T15:21:57Z | MEMBER | null | null | null | ## Describe the bug
If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import PIL
mnist = load_dataset("mnist", split="train")
first_image = mnist[0]["image"]
assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # passes
first_image = next(iter(mnist))["image"]
assert isinstance(first_image, PIL.PngImagePlugin.PngImageFile) # fails
```
## Expected results
The image should be decoded, as a PIL Image
## Actual results
We get a dictionary
```
{'bytes': b'\x89PNG\r\n\x1a\n\x00..., 'path': None}
```
## Environment info
- `datasets` version: 1.17.1.dev0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.2
- PyArrow version: 6.0.0
The bug also exists in 1.17.0
## Investigation
I think the issue is that decoding is disabled in `__iter__`:
https://github.com/huggingface/datasets/blob/dfe5b73387c5e27de6a16b0caeb39d3b9ded66d6/src/datasets/arrow_dataset.py#L1651-L1661
Do you remember why it was disabled in the first place @albertvillanova ?
Also cc @mariosasko @NielsRogge
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3473/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3473/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1924/comments | https://api.github.com/repos/huggingface/datasets/issues/1924/events | https://github.com/huggingface/datasets/issues/1924 | 813,599,733 | MDU6SXNzdWU4MTM1OTk3MzM= | 1,924 | Anonymous Dataset Addition (i.e Anonymous PR?) | {
"avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4",
"events_url": "https://api.github.com/users/PierreColombo/events{/privacy}",
"followers_url": "https://api.github.com/users/PierreColombo/followers",
"following_url": "https://api.github.com/users/PierreColombo/following{/other_user}",
"gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/PierreColombo",
"id": 22492839,
"login": "PierreColombo",
"node_id": "MDQ6VXNlcjIyNDkyODM5",
"organizations_url": "https://api.github.com/users/PierreColombo/orgs",
"received_events_url": "https://api.github.com/users/PierreColombo/received_events",
"repos_url": "https://api.github.com/users/PierreColombo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/PierreColombo"
} | [] | closed | false | null | [] | null | [
"Hi !\r\nI guess you can add a dataset without the fields that must be kept anonymous, and then update those when the anonymity period is over.\r\nYou can also make the PR from an anonymous org.\r\nPinging @yjernite just to make sure it's ok",
"Hello,\r\nI would prefer to do the reverse: adding a link to an anony... | 2021-02-22T15:22:30Z | 2022-10-05T13:07:11Z | 2022-10-05T13:07:11Z | CONTRIBUTOR | null | null | null | Hello,
Thanks a lot for your library.
We plan to submit a paper on OpenReview using the Anonymous setting. Is it possible to add a new dataset without breaking the anonymity, with a link to the paper?
Cheers
@eusip | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1924/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1924/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2249/comments | https://api.github.com/repos/huggingface/datasets/issues/2249/events | https://github.com/huggingface/datasets/pull/2249 | 865,257,826 | MDExOlB1bGxSZXF1ZXN0NjIxMzU1MzE3 | 2,249 | Allow downloading/processing/caching only specific splits | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | {
"closed_at": null,
"closed_issues": 2,
"created_at": "2021-07-21T15:34:56Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-30T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"id": 6968069,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"open_issues": 4,
"state": "open",
"title": "1.12",
"updated_at": "2021-10-13T10:26:33Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8"
} | [
"> If you pass a dictionary like this:\r\n> \r\n> ```\r\n> {\"main_metadata\": url_to_main_data,\r\n> \"secondary_metadata\": url_to_sec_data,\r\n> \"train\": url_train_data,\r\n> \"test\": url_test_data}\r\n> ```\r\n> \r\n> then only the train or test keys will be kept, which I feel not intuitive.\r\n> \r\n> For e... | 2021-04-22T17:51:44Z | 2022-07-06T15:19:48Z | null | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2249.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2249",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2249.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2249"
} | Allow downloading/processing/caching only specific splits without downloading/processing/caching the other splits.
This PR implements two steps to handle only specific splits:
- it allows processing/caching only specific splits into Arrow files
- for some simple cases, it allows downloading only specific splits (which is more intricate as it depends on the user-defined method `_split_generators`)
This PR makes several assumptions:
- `DownloadConfig` contains the configuration settings for downloading
- the parameter `split` passed to `load_dataset` is just a parameter for loading (from cache), not for downloading | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2249/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2249/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1616 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1616/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1616/comments | https://api.github.com/repos/huggingface/datasets/issues/1616/events | https://github.com/huggingface/datasets/pull/1616 | 772,074,229 | MDExOlB1bGxSZXF1ZXN0NTQzNDEwNDc1 | 1,616 | added TurkishMovieSentiment dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4",
"events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}",
"followers_url": "https://api.github.com/users/yavuzKomecoglu/followers",
"following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yavuzKomecoglu",
"id": 5150963,
"login": "yavuzKomecoglu",
"node_id": "MDQ6VXNlcjUxNTA5NjM=",
"organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs",
"received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events",
"repos_url": "https://api.github.com/users/yavuzKomecoglu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yavuzKomecoglu"
} | [] | closed | false | null | [] | null | [
"> I just generated the dataset_infos.json file\r\n> \r\n> Thanks for adding this one !\r\n\r\nThank you very much for your support."
] | 2020-12-21T11:03:16Z | 2020-12-24T07:08:41Z | 2020-12-23T16:50:06Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1616.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1616",
"merged_at": "2020-12-23T16:50:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1616.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1616"
} | This PR adds the **TurkishMovieSentiment: This dataset contains Turkish movie reviews.**
- **Homepage:** [https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks](https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks)
- **Point of Contact:** [Mustafa Keskin](https://www.linkedin.com/in/mustfkeskin/) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1616/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1616/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6059 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6059/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6059/comments | https://api.github.com/repos/huggingface/datasets/issues/6059/events | https://github.com/huggingface/datasets/issues/6059 | 1,816,537,176 | I_kwDODunzps5sRihY | 6,059 | Provide ability to load label mappings from file | {
"avatar_url": "https://avatars.githubusercontent.com/u/5028974?v=4",
"events_url": "https://api.github.com/users/david-waterworth/events{/privacy}",
"followers_url": "https://api.github.com/users/david-waterworth/followers",
"following_url": "https://api.github.com/users/david-waterworth/following{/other_user}",
"gists_url": "https://api.github.com/users/david-waterworth/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/david-waterworth",
"id": 5028974,
"login": "david-waterworth",
"node_id": "MDQ6VXNlcjUwMjg5NzQ=",
"organizations_url": "https://api.github.com/users/david-waterworth/orgs",
"received_events_url": "https://api.github.com/users/david-waterworth/received_events",
"repos_url": "https://api.github.com/users/david-waterworth/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/david-waterworth/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/david-waterworth/subscriptions",
"type": "User",
"url": "https://api.github.com/users/david-waterworth"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [] | 2023-07-22T02:04:19Z | 2023-07-22T02:04:19Z | null | NONE | null | null | null | ### Feature request
My task is classification of a dataset containing a large label set that includes a hierarchy. Even ignoring the hierarchy I'm not able to find an example using `datasets` where the label names aren't hard-coded. This works fine for classification of a handful of labels, but ideally there would be a way of loading the name/id mappings required for `datasets.features.ClassLabel` from a file.
It is possible to pass a file to ClassLabel but I cannot see an easy way of using this with `GeneratorBasedBuilder` since `self._info` is called before the `dl_manager` is constructed so even if my dataset contains say `label_mappings.json` there's no way of loading it in order to construct the `datasets.DatasetInfo`
I can see other uses for accessing the `download_manager` from `self._info` - e.g. if the files contain a schema (i.e. `arrow` or `parquet` files) the `datasets.DatasetInfo` could be inferred.
The workaround that was suggested in the forum is to generate a `.py` file from the `label_mappings.json` and import it.
```
class TestDatasetBuilder(datasets.GeneratorBasedBuilder):
VERSION = datasets.Version("1.0.0")
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
features=datasets.Features(
{
"text": datasets.Value("string"),
"label": datasets.features.ClassLabel(names=["label_1", "label_2"]),
}
),
task_templates=[TextClassification(text_column="text", label_column="label")],
)
def _split_generators(self, dl_manager):
train_path = dl_manager.download_and_extract(_TRAIN_DOWNLOAD_URL)
test_path = dl_manager.download_and_extract(_TEST_DOWNLOAD_URL)
return [
datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path}),
]
def _generate_examples(self, filepath):
"""Generate AG News examples."""
with open(filepath, encoding="utf-8") as csv_file:
csv_reader = csv.DictReader(csv_file)
for id_, row in enumerate(csv_reader):
yield id_, row
```
### Motivation
Allow `datasets.DatasetInfo` to be generated based on the contents of the dataset.
### Your contribution
I'm willing to work on a PR with guidance. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6059/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6059/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1482/comments | https://api.github.com/repos/huggingface/datasets/issues/1482/events | https://github.com/huggingface/datasets/pull/1482 | 762,686,820 | MDExOlB1bGxSZXF1ZXN0NTM3MjA4NDk3 | 1,482 | Adding medical database chinese and english | {
"avatar_url": "https://avatars.githubusercontent.com/u/16264631?v=4",
"events_url": "https://api.github.com/users/vrindaprabhu/events{/privacy}",
"followers_url": "https://api.github.com/users/vrindaprabhu/followers",
"following_url": "https://api.github.com/users/vrindaprabhu/following{/other_user}",
"gists_url": "https://api.github.com/users/vrindaprabhu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/vrindaprabhu",
"id": 16264631,
"login": "vrindaprabhu",
"node_id": "MDQ6VXNlcjE2MjY0NjMx",
"organizations_url": "https://api.github.com/users/vrindaprabhu/orgs",
"received_events_url": "https://api.github.com/users/vrindaprabhu/received_events",
"repos_url": "https://api.github.com/users/vrindaprabhu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/vrindaprabhu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vrindaprabhu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/vrindaprabhu"
} | [] | closed | false | null | [] | null | [
"Let me know it that helps !\r\nAlso feel free to ping me if you have other questions or if I can help you.",
"Now I am getting an Assertion Error!\r\n\r\n",
"All tests have passed. However, PyTest is ... | 2020-12-11T17:50:39Z | 2021-02-16T05:28:36Z | 2020-12-15T18:23:53Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1482.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1482",
"merged_at": "2020-12-15T18:23:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1482.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1482"
} | Error in creating dummy dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1482/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1482/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1257/comments | https://api.github.com/repos/huggingface/datasets/issues/1257/events | https://github.com/huggingface/datasets/pull/1257 | 758,550,490 | MDExOlB1bGxSZXF1ZXN0NTMzNzA2NDQy | 1,257 | Add Swahili news classification dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7923902?v=4",
"events_url": "https://api.github.com/users/yvonnegitau/events{/privacy}",
"followers_url": "https://api.github.com/users/yvonnegitau/followers",
"following_url": "https://api.github.com/users/yvonnegitau/following{/other_user}",
"gists_url": "https://api.github.com/users/yvonnegitau/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yvonnegitau",
"id": 7923902,
"login": "yvonnegitau",
"node_id": "MDQ6VXNlcjc5MjM5MDI=",
"organizations_url": "https://api.github.com/users/yvonnegitau/orgs",
"received_events_url": "https://api.github.com/users/yvonnegitau/received_events",
"repos_url": "https://api.github.com/users/yvonnegitau/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yvonnegitau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yvonnegitau/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yvonnegitau"
} | [] | closed | false | null | [] | null | [] | 2020-12-07T14:15:13Z | 2020-12-08T14:44:19Z | 2020-12-08T14:44:19Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1257.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1257",
"merged_at": "2020-12-08T14:44:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1257.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1257"
} | Add Swahili news classification dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1257/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1257/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3718/comments | https://api.github.com/repos/huggingface/datasets/issues/3718/events | https://github.com/huggingface/datasets/pull/3718 | 1,137,196,388 | PR_kwDODunzps4yx8r2 | 3,718 | Fix Evidence Infer Treatment dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-02-14T11:58:07Z | 2022-02-14T13:21:45Z | 2022-02-14T13:21:44Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3718.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3718",
"merged_at": "2022-02-14T13:21:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3718.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3718"
} | This PR:
- fixes a bug in the script, by removing an unnamed column with the row index: fix KeyError
- fixes the metadata JSON, by adding both configurations (1.1 and 2.0): fix ExpectedMoreDownloadedFiles
- updates the dataset card
Fix #3515. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3718/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3718/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2324 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2324/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2324/comments | https://api.github.com/repos/huggingface/datasets/issues/2324/events | https://github.com/huggingface/datasets/pull/2324 | 876,602,064 | MDExOlB1bGxSZXF1ZXN0NjMwNzE1NTQz | 2,324 | Create Audio feature | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | {
"closed_at": null,
"closed_issues": 2,
"created_at": "2021-07-21T15:34:56Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-30T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"id": 6968069,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"open_issues": 4,
"state": "open",
"title": "1.12",
"updated_at": "2021-10-13T10:26:33Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8"
} | [
"For optimal storage, it would be better to:\r\n- store only the audio file path in the cache Arrow file\r\n- perform decoding of the audio file (into audio array and sample rate) on the fly, while loading the dataset from cache (or by adding a convenient `load_audio` function)",
"Thanks a lot @lhoestq for your h... | 2021-05-05T15:55:22Z | 2021-10-13T10:26:33Z | 2021-10-13T10:26:33Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2324.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2324",
"merged_at": "2021-10-13T10:26:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2324.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2324"
} | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanced functionalities, we could eventually switch the library.
- I have implemented the audio feature as an extra: `pip install datasets[audio]`. For the moment, the typical datasets user uses only text datasets, and there is no need for them to install additional package requirements for audio/image if they do not need them.
- For tests, I require audio dependencies (so that all audio functionalities are checked with our CI test suite); I exclude Linux platforms, which require an additional library to be installed with the distribution package manager
- I also require `pytest-datadir`, which allows having (audio) data files for tests
- The audio data contain: array and sample_rate.
- The array is reshaped as 1D array (expected input for `Wav2Vec2`).
Note that to install `soundfile` on Linux, you need to install `libsndfile` using your distribution’s package manager, for example `sudo apt-get install libsndfile1`.
## Requirements Specification
- Access example with audio loading and resampling:
```python
ds[0]["audio"]
```
- Map with audio loading & resampling:
```python
def preprocess(batch):
batch["input_values"] = processor(batch["audio"]).input_values
return batch
ds = ds.map(preprocess)
```
- Map without audio loading and resampling:
```python
def preprocess(batch):
batch["labels"] = processor(batch["target_text"]).input_values
return batch
ds = ds.map(preprocess)
```
- Additional requirement specification (see https://github.com/huggingface/datasets/pull/2324#pullrequestreview-768864998): Cast audio column to change sampling rate:
```python
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 3,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2324/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2324/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5612 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5612/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5612/comments | https://api.github.com/repos/huggingface/datasets/issues/5612/events | https://github.com/huggingface/datasets/issues/5612 | 1,611,262,510 | I_kwDODunzps5gCeou | 5,612 | Arrow map type in parquet files unsupported | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_url": "https://api.github.com/users/TevenLeScao/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/TevenLeScao",
"id": 26709476,
"login": "TevenLeScao",
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"organizations_url": "https://api.github.com/users/TevenLeScao/orgs",
"received_events_url": "https://api.github.com/users/TevenLeScao/received_events",
"repos_url": "https://api.github.com/users/TevenLeScao/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/TevenLeScao/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TevenLeScao/subscriptions",
"type": "User",
"url": "https://api.github.com/users/TevenLeScao"
} | [] | open | false | null | [] | null | [
"I'm attaching a minimal reproducible example:\r\n```python\r\nfrom datasets import load_dataset\r\nimport pyarrow as pa\r\nimport pyarrow.parquet as pq\r\n\r\ntable_with_map = pa.Table.from_pydict(\r\n {\"a\": [1, 2], \"b\": [[(\"a\", 2)], [(\"b\", 4)]]},\r\n schema=pa.schema({\"a\": pa.int32(), \"b\": pa.ma... | 2023-03-06T12:03:24Z | 2023-03-14T17:20:25Z | null | CONTRIBUTOR | null | null | null | ### Describe the bug
When I try to load parquet files that were processed with Spark, I get the following issue:
`ValueError: Arrow type map<string, string ('warc_headers')> does not have a datasets dtype equivalent.`
Strangely, loading the dataset with `streaming=True` solves the issue.
### Steps to reproduce the bug
The dataset is private, but this can be reproduced with any dataset that has Arrow maps.
### Expected behavior
Loading the dataset no matter whether streaming is True or not.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-5.15.0-1029-gcp-x86_64-with-glibc2.31
- Python version: 3.10.7
- PyArrow version: 8.0.0
- Pandas version: 1.4.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5612/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5612/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/4744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4744/comments | https://api.github.com/repos/huggingface/datasets/issues/4744/events | https://github.com/huggingface/datasets/issues/4744 | 1,317,822,345 | I_kwDODunzps5OjF-J | 4,744 | Remove instructions to generate dummy data from our docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stevhliu",
"id": 59462357,
"login": "stevhliu",
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stevhliu"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gi... | null | [
"Note that for me personally, conceptually all the dummy data (even for \"canonical\" datasets) should be superseded by `datasets-server`, which performs some kind of CI/CD of datasets (including the canonical ones)",
"I totally agree: next step should be rethinking if dummy data makes sense for canonical dataset... | 2022-07-26T07:32:58Z | 2022-08-02T23:50:30Z | 2022-08-02T23:50:30Z | MEMBER | null | null | null | In our docs, we indicate to generate the dummy data: https://huggingface.co/docs/datasets/dataset_script#testing-data-and-checksum-metadata
However:
- dummy data makes sense only for datasets in our GitHub repo: so that we can test their loading with our CI
- for datasets on the Hub:
- they do not pass any CI test requiring dummy data
- there are no instructions on how they can test their dataset locally using the dummy data
- the generation of the dummy data assumes our GitHub directory structure:
- the dummy data will be generated under `./datasets/<dataset_name>/dummy` even if locally there is no `./datasets` directory (which is the usual case). See issue:
- #4742
CC: @stevhliu | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4744/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4744/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3146/comments | https://api.github.com/repos/huggingface/datasets/issues/3146/events | https://github.com/huggingface/datasets/issues/3146 | 1,033,605,947 | I_kwDODunzps49m5M7 | 3,146 | CLI test command throws NonMatchingSplitsSizesError when saving infos | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2021-10-22T13:50:53Z | 2021-10-27T08:01:49Z | 2021-10-27T08:01:49Z | MEMBER | null | null | null | When trying to generate a datset JSON metadata, a `NonMatchingSplitsSizesError` is thrown:
```
$ datasets-cli test datasets/arabic_billion_words --save_infos --all_configs
Testing builder 'Alittihad' (1/10)
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: Unknown size, post-processed: Unknown size, total: 332.13 MiB) to .cache\arabic_billion_words\Alittihad\1.1.0\8175ff1c9714c6d5d15b1141b6042e5edf048276bb81a9c14e35e149a7a62ae4...
Traceback (most recent call last):
File "path\huggingface\datasets\.venv\Scripts\datasets-cli-script.py", line 33, in <module>
sys.exit(load_entry_point('datasets', 'console_scripts', 'datasets-cli')())
File "path\huggingface\datasets\src\datasets\commands\datasets_cli.py", line 33, in main
service.run()
File "path\huggingface\datasets\src\datasets\commands\test.py", line 144, in run
builder.download_and_prepare(
File "path\huggingface\datasets\src\datasets\builder.py", line 607, in download_and_prepare
self._download_and_prepare(
File "path\huggingface\datasets\src\datasets\builder.py", line 709, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "path\huggingface\datasets\src\datasets\utils\info_utils.py", line 74, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=0, num_examples=0, dataset_name='arabic_billion_words'), 'recorded': SplitInfo(name='train', num_bytes=1601790302, num_examples=349342, dataset_name='arabic_billion_words')}]
```
This is because a previous run generated a wrong `dataset_info.json`.
This error can be avoided by passing `--ignore_verifications`, but I think this should be assumed when passing `--save_infos`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3146/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3146/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4091 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4091/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4091/comments | https://api.github.com/repos/huggingface/datasets/issues/4091/events | https://github.com/huggingface/datasets/issues/4091 | 1,192,023,855 | I_kwDODunzps5HDNcv | 4,091 | Build a Dataset One Example at a Time Without Loading All Data Into Memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/99340348?v=4",
"events_url": "https://api.github.com/users/aravind-tonita/events{/privacy}",
"followers_url": "https://api.github.com/users/aravind-tonita/followers",
"following_url": "https://api.github.com/users/aravind-tonita/following{/other_user}",
"gists_url": "https://api.github.com/users/aravind-tonita/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/aravind-tonita",
"id": 99340348,
"login": "aravind-tonita",
"node_id": "U_kgDOBevQPA",
"organizations_url": "https://api.github.com/users/aravind-tonita/orgs",
"received_events_url": "https://api.github.com/users/aravind-tonita/received_events",
"repos_url": "https://api.github.com/users/aravind-tonita/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/aravind-tonita/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aravind-tonita/subscriptions",
"type": "User",
"url": "https://api.github.com/users/aravind-tonita"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi! Yes, the problem with `add_item` is that it keeps examples in memory, so you are left with these options:\r\n* writing a dataset loading script in which you iterate over `custom_example_dict_streamer` and yield the examples (in `_generate examples`)\r\n* storing the data in a JSON/CSV/Parquet/TXT file and usin... | 2022-04-04T16:19:24Z | 2022-04-20T14:31:00Z | 2022-04-20T14:31:00Z | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
I have a very large dataset stored on disk in a custom format. I have some custom code that reads one data example at a time and yields it in the form of a dictionary. I want to construct a `Dataset` with all examples, and then save it to disk. I later want to load the saved `Dataset` and use it like any other HuggingFace dataset, get splits, wrap it in a PyTorch `DataLoader`, etc. **Crucially, I do not ever want to materialize all the data in memory while building the dataset.**
**Describe the solution you'd like**
I would like to be able to do something like the following. Notice how each example is read and then immediately added to the dataset. We do not store all the data in memory when constructing the `Dataset`. If it helps, I will know the schema of my dataset beforehand.
```
# Initialize an empty Dataset, possibly from a known schema.
dataset = Dataset()
# Read in examples one by one using a custom data streamer.
for example_dict in custom_example_dict_streamer("/path/to/raw/data"):
# Add this example to the dict but do not store it in memory.
dataset.add_item(example_dict)
# Save the final dataset to disk as an Arrow-backed dataset.
dataset.save_to_disk("/path/to/dataset")
...
# I'd like to be able to later `load_from_disk` and use the loaded Dataset
# just like any other memory-mapped pyarrow-backed HuggingFace dataset...
loaded_dataset = Dataset.load_from_disk("/path/to/dataset")
loaded_dataset.set_format(type="torch", columnns=["foo", "bar", "baz"])
dataloader = torch.utils.data.DataLoader(loaded_dataset, batch_size=16)
...
```
**Describe alternatives you've considered**
I initially tried to read all the data into memory, construct a Pandas DataFrame and then call `Dataset.from_pandas`. This would not work as it requires storing all the data in memory. It seems that there is an `add_item` method already -- I tried to implement something like the desired API written above, but I've not been able to initialize an empty `Dataset` (this seems to require several layers of constructing `datasets.table.Table` which requires constructing a `pyarrow.lib.Table`, etc). I also considered writing my data to multiple sharded CSV files or JSON files and then using `from_csv` or `from_json`. I'd prefer not to do this because (1) I'd prefer to avoid the intermediate step of creating these temp CSV/JSON files and (2) I'm not sure if `from_csv` and `from_json` use memory-mapping.
Do you have any suggestions on how I'd be able to achieve this use case? Does something already exist to support this? Thank you very much in advance! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4091/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4091/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2176/comments | https://api.github.com/repos/huggingface/datasets/issues/2176/events | https://github.com/huggingface/datasets/issues/2176 | 851,865,795 | MDU6SXNzdWU4NTE4NjU3OTU= | 2,176 | Converting a Value to a ClassLabel | {
"avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4",
"events_url": "https://api.github.com/users/nelson-liu/events{/privacy}",
"followers_url": "https://api.github.com/users/nelson-liu/followers",
"following_url": "https://api.github.com/users/nelson-liu/following{/other_user}",
"gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nelson-liu",
"id": 7272031,
"login": "nelson-liu",
"node_id": "MDQ6VXNlcjcyNzIwMzE=",
"organizations_url": "https://api.github.com/users/nelson-liu/orgs",
"received_events_url": "https://api.github.com/users/nelson-liu/received_events",
"repos_url": "https://api.github.com/users/nelson-liu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nelson-liu"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi @nelson-liu!\r\nHere is what I do to convert a string to class label:\r\n\r\n```python\r\nfrom datasets import load_dataset, features\r\n\r\n\r\ndset = load_dataset(...)\r\ncol_name = \"the string column name\"\r\n\r\nclass_names = dset.unique(col_name)\r\nclass_feature = features.ClassLabel(names=sorted(class... | 2021-04-06T22:54:16Z | 2022-06-01T16:31:49Z | 2022-06-01T16:31:49Z | NONE | null | null | null | Hi!
In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`
Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2176/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2176/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3835/comments | https://api.github.com/repos/huggingface/datasets/issues/3835/events | https://github.com/huggingface/datasets/issues/3835 | 1,161,029,205 | I_kwDODunzps5FM-ZV | 3,835 | The link given on the gigaword does not work | {
"avatar_url": "https://avatars.githubusercontent.com/u/26357784?v=4",
"events_url": "https://api.github.com/users/martin6336/events{/privacy}",
"followers_url": "https://api.github.com/users/martin6336/followers",
"following_url": "https://api.github.com/users/martin6336/following{/other_user}",
"gists_url": "https://api.github.com/users/martin6336/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/martin6336",
"id": 26357784,
"login": "martin6336",
"node_id": "MDQ6VXNlcjI2MzU3Nzg0",
"organizations_url": "https://api.github.com/users/martin6336/orgs",
"received_events_url": "https://api.github.com/users/martin6336/received_events",
"repos_url": "https://api.github.com/users/martin6336/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/martin6336/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/martin6336/subscriptions",
"type": "User",
"url": "https://api.github.com/users/martin6336"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-03-07T07:56:42Z | 2022-03-15T12:30:23Z | 2022-03-15T12:30:23Z | NONE | null | null | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3835/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3835/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5442/comments | https://api.github.com/repos/huggingface/datasets/issues/5442/events | https://github.com/huggingface/datasets/issues/5442 | 1,550,084,450 | I_kwDODunzps5cZGli | 5,442 | OneDrive Integrations with HF Datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/59222637?v=4",
"events_url": "https://api.github.com/users/Mohammed20201991/events{/privacy}",
"followers_url": "https://api.github.com/users/Mohammed20201991/followers",
"following_url": "https://api.github.com/users/Mohammed20201991/following{/other_user}",
"gists_url": "https://api.github.com/users/Mohammed20201991/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Mohammed20201991",
"id": 59222637,
"login": "Mohammed20201991",
"node_id": "MDQ6VXNlcjU5MjIyNjM3",
"organizations_url": "https://api.github.com/users/Mohammed20201991/orgs",
"received_events_url": "https://api.github.com/users/Mohammed20201991/received_events",
"repos_url": "https://api.github.com/users/Mohammed20201991/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Mohammed20201991/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mohammed20201991/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Mohammed20201991"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | [
"Hi! \r\n\r\nWe use [`fsspec`](https://github.com/fsspec/filesystem_spec) to integrate with storage providers. You can find more info (and the usage examples) in [our docs](https://huggingface.co/docs/datasets/v2.8.0/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage).\r\n\r\n[`gdrivefs`](https://githu... | 2023-01-19T23:12:08Z | 2023-02-24T16:17:51Z | 2023-02-24T16:17:51Z | NONE | null | null | null | ### Feature request
First of all, I would like to thank the whole community who developed the Datasets storage and made it freely available.
How can we integrate our OneDrive account, or any other cloud storage (like Google Drive, ...), with the **HF** datasets section?
For example, if I have **50GB** on my **OneDrive** account and I want to move data between the drive and a Hugging Face repo, or vice versa.
### Motivation
Make the dataset section more flexible with other possible storage providers,
like the integration between Google Colab and Google Drive for storage.
### Your contribution
Can be done using the Hugging Face CLI | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5442/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5442/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5257/comments | https://api.github.com/repos/huggingface/datasets/issues/5257/events | https://github.com/huggingface/datasets/pull/5257 | 1,452,656,891 | PR_kwDODunzps5DFENm | 5,257 | remove an unused statement | {
"avatar_url": "https://avatars.githubusercontent.com/u/7569098?v=4",
"events_url": "https://api.github.com/users/WrRan/events{/privacy}",
"followers_url": "https://api.github.com/users/WrRan/followers",
"following_url": "https://api.github.com/users/WrRan/following{/other_user}",
"gists_url": "https://api.github.com/users/WrRan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WrRan",
"id": 7569098,
"login": "WrRan",
"node_id": "MDQ6VXNlcjc1NjkwOTg=",
"organizations_url": "https://api.github.com/users/WrRan/orgs",
"received_events_url": "https://api.github.com/users/WrRan/received_events",
"repos_url": "https://api.github.com/users/WrRan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WrRan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WrRan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WrRan"
} | [] | closed | false | null | [] | null | [] | 2022-11-17T04:00:50Z | 2022-11-18T11:04:08Z | 2022-11-18T11:04:08Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5257.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5257",
"merged_at": "2022-11-18T11:04:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5257.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5257"
} | remove the unused statement: `input_pairs = list(zip())` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5257/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5257/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2030/comments | https://api.github.com/repos/huggingface/datasets/issues/2030/events | https://github.com/huggingface/datasets/pull/2030 | 829,110,803 | MDExOlB1bGxSZXF1ZXN0NTkwODI4NzQ4 | 2,030 | Implement Dataset from text | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"I am wondering why only one test of \"keep_in_memory=True\" fails, when there are many other tests that test the same and it happens only in pyarrow_1..."
] | 2021-03-11T12:34:50Z | 2021-03-18T13:29:29Z | 2021-03-18T13:29:29Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2030.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2030",
"merged_at": "2021-03-18T13:29:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2030.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2030"
} | Implement `Dataset.from_text`.
Analogous to #1943, #1946. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2030/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2030/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1761 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1761/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1761/comments | https://api.github.com/repos/huggingface/datasets/issues/1761/events | https://github.com/huggingface/datasets/pull/1761 | 791,150,858 | MDExOlB1bGxSZXF1ZXN0NTU5MjUyMzEw | 1,761 | Add SILICONE benchmark | {
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"events_url": "https://api.github.com/users/eusip/events{/privacy}",
"followers_url": "https://api.github.com/users/eusip/followers",
"following_url": "https://api.github.com/users/eusip/following{/other_user}",
"gists_url": "https://api.github.com/users/eusip/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eusip",
"id": 1551356,
"login": "eusip",
"node_id": "MDQ6VXNlcjE1NTEzNTY=",
"organizations_url": "https://api.github.com/users/eusip/orgs",
"received_events_url": "https://api.github.com/users/eusip/received_events",
"repos_url": "https://api.github.com/users/eusip/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eusip/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eusip"
} | [] | closed | false | null | [] | null | [
"Thanks for the feedback. All your comments have been addressed!",
"Thank you for your constructive feedback! I now know how to best format future datasets that our team plans to publish in the near future :)",
"Awesome ! Looking forward to it :) ",
"Hi @lhoestq ! One last question. Our research team would li... | 2021-01-21T14:29:12Z | 2021-02-04T14:32:48Z | 2021-01-26T13:50:31Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1761.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1761",
"merged_at": "2021-01-26T13:50:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1761.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1761"
} | My collaborators and I within the Affective Computing team at Telecom Paris would like to re-submit our spoken dialogue dataset for publication.
This is a new pull request relative to the [previously closed request](https://github.com/huggingface/datasets/pull/1712) which was reviewed by @lhoestq.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1761/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1761/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3642/comments | https://api.github.com/repos/huggingface/datasets/issues/3642/events | https://github.com/huggingface/datasets/pull/3642 | 1,116,306,986 | PR_kwDODunzps4xrj2S | 3,642 | Fix dataset slicing with negative bounds when indices mapping is not `None` | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | 2022-01-27T14:45:53Z | 2022-01-27T18:16:23Z | 2022-01-27T18:16:22Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3642.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3642",
"merged_at": "2022-01-27T18:16:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3642.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3642"
} | Fix #3611 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3642/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3642/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2894/comments | https://api.github.com/repos/huggingface/datasets/issues/2894/events | https://github.com/huggingface/datasets/pull/2894 | 993,375,654 | MDExOlB1bGxSZXF1ZXN0NzMxNTcxODc5 | 2,894 | Fix COUNTER dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2021-09-10T16:07:29Z | 2021-09-10T16:27:45Z | 2021-09-10T16:27:44Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2894.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2894",
"merged_at": "2021-09-10T16:27:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2894.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2894"
} | Fix filename generating `FileNotFoundError`.
Related to #2866.
CC: @severo. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2894/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2894/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5161/comments | https://api.github.com/repos/huggingface/datasets/issues/5161/events | https://github.com/huggingface/datasets/issues/5161 | 1,422,371,748 | I_kwDODunzps5Ux6uk | 5,161 | Dataset can’t cache model’s outputs | {
"avatar_url": "https://avatars.githubusercontent.com/u/37979232?v=4",
"events_url": "https://api.github.com/users/jongjyh/events{/privacy}",
"followers_url": "https://api.github.com/users/jongjyh/followers",
"following_url": "https://api.github.com/users/jongjyh/following{/other_user}",
"gists_url": "https://api.github.com/users/jongjyh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jongjyh",
"id": 37979232,
"login": "jongjyh",
"node_id": "MDQ6VXNlcjM3OTc5MjMy",
"organizations_url": "https://api.github.com/users/jongjyh/orgs",
"received_events_url": "https://api.github.com/users/jongjyh/received_events",
"repos_url": "https://api.github.com/users/jongjyh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jongjyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jongjyh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jongjyh"
} | [] | closed | false | null | [] | null | [
"Addressed in https://github.com/huggingface/datasets/pull/5191 (torch.Tensor objects now produce deterministic hashes)"
] | 2022-10-25T12:19:00Z | 2022-11-03T16:12:52Z | 2022-11-03T16:12:51Z | NONE | null | null | null | ### Describe the bug
Hi,
I try to cache some outputs of a teacher model (knowledge distillation) by using the map function of the Datasets library, but every time I run my code, all the sequences are recomputed. I tested a BERT model like this and got a different hash on every single run, so any idea how to deal with this?
### Steps to reproduce the bug
1. run below code
2. get different hash
```
from transformers import BertModel
from transformers import AutoTokenizer
import torch
token = ['hello']
model = BertModel.from_pretrained("bert-base-uncased").eval()
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
def abcd():
with torch.no_grad():
out = model(**tok(token,return_tensors='pt'))[0]
# out = tok(token)
return out
from datasets.fingerprint import Hasher
my_func = abcd
print(Hasher.hash(my_func))
print(abcd())
```
### Expected behavior
I want to cache all the model outputs.
### Environment info
datasets:2.5.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5161/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5852/comments | https://api.github.com/repos/huggingface/datasets/issues/5852/events | https://github.com/huggingface/datasets/pull/5852 | 1,707,927,165 | PR_kwDODunzps5QZ1lj | 5,852 | Iterable torch formatting | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-05-12T16:48:49Z | 2023-06-13T16:04:05Z | 2023-06-13T15:57:05Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5852.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5852",
"merged_at": "2023-06-13T15:57:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5852.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5852"
} | Used the TorchFormatter to get torch tensors in an iterable dataset when the format is set to "torch".
It uses the data from Arrow if possible, otherwise applies recursive_tensorize.
When set back to format_type=None, cast_to_python_objects is used.
requires https://github.com/huggingface/datasets/pull/5821
close https://github.com/huggingface/datasets/issues/5793 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5852/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5852/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3656/comments | https://api.github.com/repos/huggingface/datasets/issues/3656/events | https://github.com/huggingface/datasets/issues/3656 | 1,120,510,823 | I_kwDODunzps5CyaNn | 3,656 | checksum error subjqa dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/9828683?v=4",
"events_url": "https://api.github.com/users/RensDimmendaal/events{/privacy}",
"followers_url": "https://api.github.com/users/RensDimmendaal/followers",
"following_url": "https://api.github.com/users/RensDimmendaal/following{/other_user}",
"gists_url": "https://api.github.com/users/RensDimmendaal/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/RensDimmendaal",
"id": 9828683,
"login": "RensDimmendaal",
"node_id": "MDQ6VXNlcjk4Mjg2ODM=",
"organizations_url": "https://api.github.com/users/RensDimmendaal/orgs",
"received_events_url": "https://api.github.com/users/RensDimmendaal/received_events",
"repos_url": "https://api.github.com/users/RensDimmendaal/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/RensDimmendaal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/RensDimmendaal/subscriptions",
"type": "User",
"url": "https://api.github.com/users/RensDimmendaal"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi @RensDimmendaal, \r\n\r\nI'm sorry but I can't reproduce your bug:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset(\"subjqa\", \"electronics\")\r\nDownloading builder script: 9.15kB [00:00, 4.10MB/s] ... | 2022-02-01T10:53:33Z | 2022-02-10T10:56:59Z | 2022-02-10T10:56:38Z | NONE | null | null | null | ## Describe the bug
I get a checksum error when loading the `subjqa` dataset (used in the transformers book).
## Steps to reproduce the bug
```python
from datasets import load_dataset
subjqa = load_dataset("subjqa","electronics")
```
## Expected results
Loading the dataset
## Actual results
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-2-d2857d460155> in <module>()
2 from datasets import load_dataset
3
----> 4 subjqa = load_dataset("subjqa","electronics")
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
38 if len(bad_urls) > 0:
39 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 40 raise NonMatchingChecksumError(error_msg + str(bad_urls))
41 logger.info("All the checksums matched successfully" + for_verification_name)
42
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/lewtun/SubjQA/archive/refs/heads/master.zip']
```
## Environment info
Google colab
- `datasets` version: 1.18.2
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3656/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3656/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/1670 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1670/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1670/comments | https://api.github.com/repos/huggingface/datasets/issues/1670/events | https://github.com/huggingface/datasets/issues/1670 | 776,608,579 | MDU6SXNzdWU3NzY2MDg1Nzk= | 1,670 | wiki_dpr pre-processing performance | {
"avatar_url": "https://avatars.githubusercontent.com/u/753898?v=4",
"events_url": "https://api.github.com/users/dbarnhart/events{/privacy}",
"followers_url": "https://api.github.com/users/dbarnhart/followers",
"following_url": "https://api.github.com/users/dbarnhart/following{/other_user}",
"gists_url": "https://api.github.com/users/dbarnhart/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dbarnhart",
"id": 753898,
"login": "dbarnhart",
"node_id": "MDQ6VXNlcjc1Mzg5OA==",
"organizations_url": "https://api.github.com/users/dbarnhart/orgs",
"received_events_url": "https://api.github.com/users/dbarnhart/received_events",
"repos_url": "https://api.github.com/users/dbarnhart/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dbarnhart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dbarnhart/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dbarnhart"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "72f99f",
"default": fals... | open | false | null | [] | null | [
"Hi ! And thanks for the tips :) \r\n\r\nIndeed currently `wiki_dpr` takes some time to be processed.\r\nMultiprocessing for dataset generation is definitely going to speed up things.\r\n\r\nRegarding the index note that for the default configurations, the index is downloaded instead of being built, which avoid spe... | 2020-12-30T19:41:43Z | 2021-01-28T09:41:36Z | null | NONE | null | null | null | I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h).
I won't repeat the concerns around multiprocessing as they are addressed in other issues (#786), but this is the first obvious thing to do. Using cython to speed up the text manipulation may also help. Loading and processing a dataset of this size in under 15 minutes does not seem unreasonable on a modern multi-core machine. I have hit such targets myself on similar tasks. Would love to see this improve.
The other issue is that it takes 3h to construct the FAISS index. If only we could use GPUs with HNSW, but we can't. My sharded GPU indexing code can build an IVF + PQ index in 10 minutes on 20 million vectors. Still, 3h seems slow even for the CPU.
It looks like HF is adding only 1000 vectors at a time by default [2], whereas the faiss benchmarks add 1 million vectors at a time (effectively) [3]. It's possible the runtime could be reduced with a larger batch. Also, it looks like project dependencies ultimately use OpenBLAS, but this is known to have issues when combined with OpenMP, which HNSW uses [4]. A workaround is to set the environment variable `OMP_WAIT_POLICY=PASSIVE` via `os.environ` or similar.
References:
[1] https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py
[2] https://github.com/huggingface/datasets/blob/master/src/datasets/search.py
[3] https://github.com/facebookresearch/faiss/blob/master/benchs/bench_hnsw.py
[4] https://github.com/facebookresearch/faiss/issues/422 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1670/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1670/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1683/comments | https://api.github.com/repos/huggingface/datasets/issues/1683/events | https://github.com/huggingface/datasets/issues/1683 | 778,287,612 | MDU6SXNzdWU3NzgyODc2MTI= | 1,683 | `ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext | {
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abarbosa94",
"id": 6608232,
"login": "abarbosa94",
"node_id": "MDQ6VXNlcjY2MDgyMzI=",
"organizations_url": "https://api.github.com/users/abarbosa94/orgs",
"received_events_url": "https://api.github.com/users/abarbosa94/received_events",
"repos_url": "https://api.github.com/users/abarbosa94/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abarbosa94"
} | [] | closed | false | null | [] | null | [
"Looks like the mapping function returns a dictionary with a 768-dim array in the `embeddings` field. Since the map is batched, we actually expect the `embeddings` field to be an array of shape (batch_size, 768) to have one embedding per example in the batch.\r\n\r\nTo fix that can you try to remove one of the `[0]... | 2021-01-04T18:47:53Z | 2021-01-04T19:04:45Z | 2021-01-04T19:04:45Z | CONTRIBUTOR | null | null | null | It seems to fail the final batch ):
steps to reproduce:
```
from datasets import load_dataset
from elasticsearch import Elasticsearch
import torch
from transformers import file_utils, set_seed
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast
MAX_SEQ_LENGTH = 256
ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base", cache_dir="../datasets/")
ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(
"facebook/dpr-ctx_encoder-single-nq-base",
cache_dir="..datasets/"
)
dataset = load_dataset('text',
data_files='data/raw/ARC_Corpus.txt',
cache_dir='../datasets')
torch.set_grad_enabled(False)
ds_with_embeddings = dataset.map(
lambda example: {
'embeddings': ctx_encoder(
**ctx_tokenizer(
example["text"],
padding='max_length',
truncation=True,
max_length=MAX_SEQ_LENGTH,
return_tensors="pt"
)
)[0][0].numpy(),
},
batched=True,
load_from_cache_file=False,
batch_size=1000
)
```
ARC Corpus can be obtained from [here](https://ai2-datasets.s3-us-west-2.amazonaws.com/arc/ARC-V1-Feb2018.zip)
And then the error:
```
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-13-67d139bb2ed3> in <module>
14 batched=True,
15 load_from_cache_file=False,
---> 16 batch_size=1000
17 )
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc)
301 num_proc=num_proc,
302 )
--> 303 for k, dataset in self.items()
304 }
305 )
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0)
301 num_proc=num_proc,
302 )
--> 303 for k, dataset in self.items()
304 }
305 )
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1257 fn_kwargs=fn_kwargs,
1258 new_fingerprint=new_fingerprint,
-> 1259 update_data=update_data,
1260 )
1261 else:
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
155 }
156 # apply actual function
--> 157 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
158 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
159 # re-apply format to the output
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
161 # Call actual function
162
--> 163 out = func(self, *args, **kwargs)
164
165 # Update fingerprint of in-place transforms + update in-place history of transforms
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)
1526 if update_data:
1527 batch = cast_to_python_objects(batch)
-> 1528 writer.write_batch(batch)
1529 if update_data:
1530 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)
276 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)
277 typed_sequence_examples[col] = typed_sequence
--> 278 pa_table = pa.Table.from_pydict(typed_sequence_examples)
279 self.write_table(pa_table)
280
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_arrays()
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.validate()
~/.cache/pypoetry/virtualenvs/masters-utTTC0p8-py3.7/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Column 1 named text expected length 768 but got length 1000
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1683/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1683/timeline | null | completed | false |