url stringlengths 58 61 | repository_url stringclasses 1 value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 46 51 | id int64 599M 2.35B | node_id stringlengths 18 32 | number int64 1 6.97k | title stringlengths 1 290 | user dict | labels listlengths 0 4 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 4 | milestone dict | comments int64 0 70 | created_at timestamp[us, tz=UTC] | updated_at timestamp[us, tz=UTC] | closed_at timestamp[us, tz=UTC] | author_association stringclasses 4 values | active_lock_reason float64 | draft float64 0 1 ⌀ | pull_request dict | body stringlengths 0 228k ⌀ | reactions dict | timeline_url stringlengths 67 70 | performed_via_github_app float64 | state_reason stringclasses 3 values | existe_pull_request bool 2 classes | comentarios listlengths 0 30 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/637/comments | https://api.github.com/repos/huggingface/datasets/issues/637/events | https://github.com/huggingface/datasets/pull/637 | 703,539,909 | MDExOlB1bGxSZXF1ZXN0NDg4NjMwNzk4 | 637 | Add MATINF | {
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 0 | 2020-09-17T12:24:53Z | 2020-09-17T13:23:18Z | 2020-09-17T13:23:17Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/637.diff",
"html_url": "https://github.com/huggingface/datasets/pull/637",
"merged_at": "2020-09-17T13:23:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/637.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/637... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/637/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/637/timeline | null | null | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/636 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/636/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/636/comments | https://api.github.com/repos/huggingface/datasets/issues/636/events | https://github.com/huggingface/datasets/pull/636 | 702,883,989 | MDExOlB1bGxSZXF1ZXN0NDg4MDg3OTA5 | 636 | Consistent ner features | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-16T15:56:25Z | 2020-09-17T09:52:59Z | 2020-09-17T09:52:58Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/636.diff",
"html_url": "https://github.com/huggingface/datasets/pull/636",
"merged_at": "2020-09-17T09:52:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/636.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/636... | As discussed in #613 , this PR aims at making NER feature names consistent across datasets.
I changed the feature names of LinCE and XTREME/PAN-X | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/636/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/636/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/635 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/635/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/635/comments | https://api.github.com/repos/huggingface/datasets/issues/635/events | https://github.com/huggingface/datasets/pull/635 | 702,822,439 | MDExOlB1bGxSZXF1ZXN0NDg4MDM2OTE5 | 635 | Loglevel | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 2 | 2020-09-16T14:37:53Z | 2020-09-17T09:52:19Z | 2020-09-17T09:52:18Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/635.diff",
"html_url": "https://github.com/huggingface/datasets/pull/635",
"merged_at": "2020-09-17T09:52:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/635.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/635... | Continuation of #618 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/635/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/635/timeline | null | null | true | [
"I think it's ready now @stas00, did you want to add something else ?\r\nThis PR includes your changes but with the level set to warning",
"LGTM, thank you, @lhoestq "
] |
https://api.github.com/repos/huggingface/datasets/issues/634 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/634/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/634/comments | https://api.github.com/repos/huggingface/datasets/issues/634/events | https://github.com/huggingface/datasets/pull/634 | 702,676,041 | MDExOlB1bGxSZXF1ZXN0NDg3OTEzOTk4 | 634 | Add ConLL-2000 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https... | [] | closed | false | null | [] | null | 0 | 2020-09-16T11:14:11Z | 2020-09-17T10:38:10Z | 2020-09-17T10:38:10Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/634.diff",
"html_url": "https://github.com/huggingface/datasets/pull/634",
"merged_at": "2020-09-17T10:38:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/634.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/634... | Adds ConLL-2000 dataset used for text chunking. See https://www.clips.uantwerpen.be/conll2000/chunking/ for details and [motivation](https://github.com/huggingface/transformers/pull/7041#issuecomment-692710948) behind this PR | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/634/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/634/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/633 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/633/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/633/comments | https://api.github.com/repos/huggingface/datasets/issues/633/events | https://github.com/huggingface/datasets/issues/633 | 702,440,484 | MDU6SXNzdWU3MDI0NDA0ODQ= | 633 | Load large text file for LM pre-training resulting in OOM | {
"avatar_url": "https://avatars.githubusercontent.com/u/29704017?v=4",
"events_url": "https://api.github.com/users/leethu2012/events{/privacy}",
"followers_url": "https://api.github.com/users/leethu2012/followers",
"following_url": "https://api.github.com/users/leethu2012/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | null | 27 | 2020-09-16T04:33:15Z | 2021-02-16T12:02:01Z | null | NONE | null | null | null | I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from datasets import load_dataset
@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
"""
Data collator u... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/633/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/633/timeline | null | null | false | [
"Not sure what could cause that on the `datasets` side. Could this be a `Trainer` issue ? cc @julien-c @sgugger ?",
"There was a memory leak issue fixed recently in master. You should install from source and see if it fixes your problem.",
"@lhoestq @sgugger Thanks for your comments. I have install from source ... |
https://api.github.com/repos/huggingface/datasets/issues/632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/632/comments | https://api.github.com/repos/huggingface/datasets/issues/632/events | https://github.com/huggingface/datasets/pull/632 | 702,358,124 | MDExOlB1bGxSZXF1ZXN0NDg3NjQ5OTQ2 | 632 | Fix typos in the loading datasets docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url"... | [] | closed | false | null | [] | null | 1 | 2020-09-16T00:27:41Z | 2020-09-21T16:31:11Z | 2020-09-16T06:52:44Z | COLLABORATOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/632.diff",
"html_url": "https://github.com/huggingface/datasets/pull/632",
"merged_at": "2020-09-16T06:52:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/632.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/632... | This PR fixes two typos in the loading datasets docs, one of them being a broken link to the `load_dataset` function. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/632/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/632/timeline | null | null | true | [
"thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/631 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/631/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/631/comments | https://api.github.com/repos/huggingface/datasets/issues/631/events | https://github.com/huggingface/datasets/pull/631 | 701,711,255 | MDExOlB1bGxSZXF1ZXN0NDg3MTE3OTA0 | 631 | Fix text delimiter | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 5 | 2020-09-15T08:08:42Z | 2020-09-22T15:03:06Z | 2020-09-15T08:26:25Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/631.diff",
"html_url": "https://github.com/huggingface/datasets/pull/631",
"merged_at": "2020-09-15T08:26:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/631.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/631... | I changed the delimiter in the `text` dataset script.
It should fix the `pyarrow.lib.ArrowInvalid: CSV parse error` from #622
I changed the delimiter to an unused ascii character that is not present in text files : `\b` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/631/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/631/timeline | null | null | true | [
"Which OS are you using ?@abhi1nandy2",
"> Which OS are you using ?\r\n\r\nPRETTY_NAME=\"Debian GNU/Linux 9 (stretch)\"\r\nNAME=\"Debian GNU/Linux\"\r\nVERSION_ID=\"9\"\r\nVERSION=\"9 (stretch)\"\r\nVERSION_CODENAME=stretch\r\nID=debian\r\nHOME_URL=\"https://www.debian.org/\"\r\nSUPPORT_URL=\"https://www.debian.o... |
https://api.github.com/repos/huggingface/datasets/issues/630 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/630/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/630/comments | https://api.github.com/repos/huggingface/datasets/issues/630/events | https://github.com/huggingface/datasets/issues/630 | 701,636,350 | MDU6SXNzdWU3MDE2MzYzNTA= | 630 | Text dataset not working with large files | {
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | null | 11 | 2020-09-15T06:02:36Z | 2020-09-25T22:21:43Z | 2020-09-25T22:21:43Z | NONE | null | null | null | ```
Traceback (most recent call last):
File "examples/language-modeling/run_language_modeling.py", line 333, in <module>
main()
File "examples/language-modeling/run_language_modeling.py", line 262, in main
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/630/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/630/timeline | null | completed | false | [
"Seems like it works when setting ```block_size=2100000000``` or something arbitrarily large though.",
"Can you give us some stats on the data files you use as inputs?",
"Basically ~600MB txt files(UTF-8) * 59. \r\ncontents like ```안녕하세요, 이것은 예제로 한번 말해보는 텍스트입니다. 그냥 이렇다고요.<|endoftext|>\\n```\r\n\r\nAlso, it gets... |
https://api.github.com/repos/huggingface/datasets/issues/629 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/629/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/629/comments | https://api.github.com/repos/huggingface/datasets/issues/629/events | https://github.com/huggingface/datasets/issues/629 | 701,517,550 | MDU6SXNzdWU3MDE1MTc1NTA= | 629 | straddling object straddles two block boundaries | {
"avatar_url": "https://avatars.githubusercontent.com/u/17970177?v=4",
"events_url": "https://api.github.com/users/bharaniabhishek123/events{/privacy}",
"followers_url": "https://api.github.com/users/bharaniabhishek123/followers",
"following_url": "https://api.github.com/users/bharaniabhishek123/following{/oth... | [] | closed | false | null | [] | null | 1 | 2020-09-15T00:30:46Z | 2020-09-15T00:36:17Z | 2020-09-15T00:32:17Z | NONE | null | null | null | I am trying to read json data (it's an array with lots of dictionaries) and getting block boundaries issue as below :
I tried calling read_json with readOptions but no luck .
```
table = json.read_json(fn)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pyarrow/_json.pyx", li... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/629/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/629/timeline | null | completed | false | [
"sorry it's an apache arrow issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/628 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/628/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/628/comments | https://api.github.com/repos/huggingface/datasets/issues/628/events | https://github.com/huggingface/datasets/pull/628 | 701,496,053 | MDExOlB1bGxSZXF1ZXN0NDg2OTQyNzgx | 628 | Update docs links in the contribution guideline | {
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | null | 1 | 2020-09-14T23:27:19Z | 2020-11-02T21:03:23Z | 2020-09-15T06:19:35Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/628.diff",
"html_url": "https://github.com/huggingface/datasets/pull/628",
"merged_at": "2020-09-15T06:19:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/628.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/628... | Fixed the `add a dataset` and `share a dataset` links in the contribution guideline to refer to the new docs website. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/628/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/628/timeline | null | null | true | [
"Thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/627/comments | https://api.github.com/repos/huggingface/datasets/issues/627/events | https://github.com/huggingface/datasets/pull/627 | 701,411,661 | MDExOlB1bGxSZXF1ZXN0NDg2ODcxMTg2 | 627 | fix (#619) MLQA features names | {
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | null | 0 | 2020-09-14T20:41:59Z | 2020-11-02T21:04:32Z | 2020-09-16T06:54:11Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/627.diff",
"html_url": "https://github.com/huggingface/datasets/pull/627",
"merged_at": "2020-09-16T06:54:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/627.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/627... | Fixed the features names as suggested in (#619) in the `_generate_examples` and `_info` methods in the MLQA loading script and also changed the names in the `dataset_infos.json` file. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/627/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/627/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/626 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/626/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/626/comments | https://api.github.com/repos/huggingface/datasets/issues/626/events | https://github.com/huggingface/datasets/pull/626 | 701,352,605 | MDExOlB1bGxSZXF1ZXN0NDg2ODIzMTY1 | 626 | Update GLUE URLs (now hosted on FB) | {
"avatar_url": "https://avatars.githubusercontent.com/u/57466294?v=4",
"events_url": "https://api.github.com/users/jeswan/events{/privacy}",
"followers_url": "https://api.github.com/users/jeswan/followers",
"following_url": "https://api.github.com/users/jeswan/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | null | 0 | 2020-09-14T19:05:39Z | 2020-09-16T06:53:18Z | 2020-09-16T06:53:18Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/626.diff",
"html_url": "https://github.com/huggingface/datasets/pull/626",
"merged_at": "2020-09-16T06:53:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/626.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/626... | NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112.
Note: rebased on huggingface/dat... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/626/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/626/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/625/comments | https://api.github.com/repos/huggingface/datasets/issues/625/events | https://github.com/huggingface/datasets/issues/625 | 701,057,799 | MDU6SXNzdWU3MDEwNTc3OTk= | 625 | dtype of tensors should be preserved | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url":... | [] | closed | false | null | [] | null | 9 | 2020-09-14T12:38:05Z | 2021-08-17T08:30:04Z | 2021-08-17T08:30:04Z | CONTRIBUTOR | null | null | null | After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/625/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/625/timeline | null | completed | false | [
"Indeed we convert tensors to list to be able to write in arrow format. Because of this conversion we lose the dtype information. We should add the dtype detection when we do type inference. However it would require a bit of refactoring since currently the conversion happens before the type inference..\r\n\r\nAnd t... |
https://api.github.com/repos/huggingface/datasets/issues/624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/624/comments | https://api.github.com/repos/huggingface/datasets/issues/624/events | https://github.com/huggingface/datasets/issues/624 | 700,541,628 | MDU6SXNzdWU3MDA1NDE2Mjg= | 624 | Add learningq dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17561003?v=4",
"events_url": "https://api.github.com/users/krrishdholakia/events{/privacy}",
"followers_url": "https://api.github.com/users/krrishdholakia/followers",
"following_url": "https://api.github.com/users/krrishdholakia/following{/other_user}",
... | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | [] | null | 0 | 2020-09-13T10:20:27Z | 2020-09-14T09:50:02Z | null | NONE | null | null | null | Hi,
Thank you again for this amazing repo.
Would it be possible for y'all to add the LearningQ dataset - https://github.com/AngusGLChen/LearningQ ?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/624/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/624/timeline | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/623 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/623/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/623/comments | https://api.github.com/repos/huggingface/datasets/issues/623/events | https://github.com/huggingface/datasets/issues/623 | 700,235,308 | MDU6SXNzdWU3MDAyMzUzMDg= | 623 | Custom feature types in `load_dataset` from CSV | {
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"events_url": "https://api.github.com/users/lvwerra/events{/privacy}",
"followers_url": "https://api.github.com/users/lvwerra/followers",
"following_url": "https://api.github.com/users/lvwerra/following{/other_user}",
"gists_url": "https:/... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | null | 7 | 2020-09-12T13:21:34Z | 2020-09-30T19:51:43Z | 2020-09-30T08:39:54Z | MEMBER | null | null | null | I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotion dataset. To get the data you can use the followi... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/623/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/623/timeline | null | completed | false | [
"Currently `csv` doesn't support the `features` attribute (unlike `json`).\r\nWhat you can do for now is cast the features using the in-place transform `cast_`\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label... |
https://api.github.com/repos/huggingface/datasets/issues/622 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/622/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/622/comments | https://api.github.com/repos/huggingface/datasets/issues/622/events | https://github.com/huggingface/datasets/issues/622 | 700,225,826 | MDU6SXNzdWU3MDAyMjU4MjY= | 622 | load_dataset for text files not working | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url":... | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | 41 | 2020-09-12T12:49:28Z | 2020-10-28T11:07:31Z | 2020-10-28T11:07:30Z | CONTRIBUTOR | null | null | null | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/622/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/622/timeline | null | completed | false | [
"Can you give us more information on your os and pip environments (pip list)?",
"@thomwolf Sure. I'll try downgrading to 3.7 now even though Arrow say they support >=3.5.\r\n\r\nLinux (Ubuntu 18.04) - Python 3.8\r\n======================\r\nPackage - Version\r\n---------------------\r\ncertifi 2... |
https://api.github.com/repos/huggingface/datasets/issues/621 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/621/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/621/comments | https://api.github.com/repos/huggingface/datasets/issues/621/events | https://github.com/huggingface/datasets/pull/621 | 700,171,097 | MDExOlB1bGxSZXF1ZXN0NDg1ODQ3ODYz | 621 | [docs] Index: The native emoji looks kinda ugly in large size | {
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https... | [] | closed | false | null | [] | null | 0 | 2020-09-12T09:48:40Z | 2020-09-15T06:20:03Z | 2020-09-15T06:20:02Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/621.diff",
"html_url": "https://github.com/huggingface/datasets/pull/621",
"merged_at": "2020-09-15T06:20:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/621.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/621... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/621/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/621/timeline | null | null | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/620 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/620/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/620/comments | https://api.github.com/repos/huggingface/datasets/issues/620/events | https://github.com/huggingface/datasets/issues/620 | 699,815,135 | MDU6SXNzdWU2OTk4MTUxMzU= | 620 | map/filter multiprocessing raises errors and corrupts datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4",
"events_url": "https://api.github.com/users/timothyjlaurent/events{/privacy}",
"followers_url": "https://api.github.com/users/timothyjlaurent/followers",
"following_url": "https://api.github.com/users/timothyjlaurent/following{/other_user}",... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | 22 | 2020-09-11T22:30:06Z | 2020-10-08T16:31:47Z | 2020-10-08T16:31:46Z | NONE | null | null | null | After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_split(test_si... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/620/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/620/timeline | null | completed | false | [
"It seems that I ran into the same problem\r\n```\r\ndef tokenize(cols, example):\r\n for in_col, out_col in cols.items():\r\n example[out_col] = hf_tokenizer.convert_tokens_to_ids(hf_tokenizer.tokenize(example[in_col]))\r\n return example\r\ncola = datasets.load_dataset('glue', 'cola')\r\ntokenized_cola = col... |
https://api.github.com/repos/huggingface/datasets/issues/619 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/619/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/619/comments | https://api.github.com/repos/huggingface/datasets/issues/619/events | https://github.com/huggingface/datasets/issues/619 | 699,733,612 | MDU6SXNzdWU2OTk3MzM2MTI= | 619 | Mistakes in MLQA features names | {
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | null | [] | null | 1 | 2020-09-11T20:46:23Z | 2020-09-16T06:59:19Z | 2020-09-16T06:59:19Z | CONTRIBUTOR | null | null | null | I think the following features in MLQA shouldn't be named the way they are:
1. `questions` (should be `question`)
2. `ids` (should be `id`)
3. `start` (should be `answer_start`)
The reasons I'm suggesting these features be renamed are:
* To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA et... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/619/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/619/timeline | null | completed | false | [
"Indeed you're right ! Thanks for reporting that\r\n\r\nCould you open a PR to fix the features names ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/618 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/618/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/618/comments | https://api.github.com/repos/huggingface/datasets/issues/618/events | https://github.com/huggingface/datasets/pull/618 | 699,684,831 | MDExOlB1bGxSZXF1ZXN0NDg1NDAxMzI5 | 618 | sync logging utils with transformers | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | null | 12 | 2020-09-11T19:46:13Z | 2020-09-17T15:40:59Z | 2020-09-17T09:53:47Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/618.diff",
"html_url": "https://github.com/huggingface/datasets/pull/618",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/618.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/618"
} | sync the docs/code with the recent changes in transformers' `logging` utils:
1. change the default level to `WARNING`
2. add `DATASETS_VERBOSITY` env var
3. expand docs | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/618/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/618/timeline | null | null | true | [
"Also, some downloads and dataset processing can be quite long for large datasets like wikipedia/pg19/etc. We probably don't want to user to think that the library is hanging. Happy to reorganize logging between DEBUG/INFO/WARNING to make it less verbose by default though.",
"The problem is that `transformers` im... |
https://api.github.com/repos/huggingface/datasets/issues/617 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/617/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/617/comments | https://api.github.com/repos/huggingface/datasets/issues/617/events | https://github.com/huggingface/datasets/issues/617 | 699,472,596 | MDU6SXNzdWU2OTk0NzI1OTY= | 617 | Compare different Rouge implementations | {
"avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4",
"events_url": "https://api.github.com/users/ibeltagy/events{/privacy}",
"followers_url": "https://api.github.com/users/ibeltagy/followers",
"following_url": "https://api.github.com/users/ibeltagy/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 7 | 2020-09-11T15:49:32Z | 2023-03-22T12:08:44Z | 2020-10-02T09:52:18Z | NONE | null | null | null | I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example.
Ca... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/617/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/617/timeline | null | completed | false | [
"Updates - the differences between the following three\r\n(1) https://github.com/bheinzerling/pyrouge (previously popular. The one I trust the most)\r\n(2) https://github.com/google-research/google-research/tree/master/rouge\r\n(3) https://github.com/pltrdy/files2rouge (used in fairseq)\r\ncan be explained by two t... |
https://api.github.com/repos/huggingface/datasets/issues/616 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/616/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/616/comments | https://api.github.com/repos/huggingface/datasets/issues/616/events | https://github.com/huggingface/datasets/issues/616 | 699,462,293 | MDU6SXNzdWU2OTk0NjIyOTM= | 616 | UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url":... | [] | open | false | null | [] | null | 14 | 2020-09-11T15:39:16Z | 2021-07-22T21:12:21Z | null | CONTRIBUTOR | null | null | null | I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 4,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/616/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/616/timeline | null | null | false | [
"I have the same issue",
"Same issue here when Trying to load a dataset from disk.",
"I am also experiencing this issue, and don't know if it's affecting my training.",
"Same here. I hope the dataset is not being modified in-place.",
"I think the only way to avoid this warning would be to do a copy of the n... |
https://api.github.com/repos/huggingface/datasets/issues/615 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/615/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/615/comments | https://api.github.com/repos/huggingface/datasets/issues/615/events | https://github.com/huggingface/datasets/issues/615 | 699,410,773 | MDU6SXNzdWU2OTk0MTA3NzM= | 615 | Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 16 | 2020-09-11T14:50:38Z | 2024-05-02T06:53:15Z | 2020-09-19T16:46:31Z | MEMBER | null | null | null | How to reproduce:
```python
from datasets import load_dataset
wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
<ipython-input-13-38... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/615/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/615/timeline | null | completed | false | [
"Related: https://issues.apache.org/jira/browse/ARROW-9773\r\n\r\nIt's definitely a size thing. I took a smaller dataset with 87000 rows and did:\r\n```\r\nfor i in range(10,1000,20):\r\n table = pa.concat_tables([dset._data]*i)\r\n table.take([0])\r\n```\r\nand it broke at around i=300.\r\n\r\nAlso when `_in... |
https://api.github.com/repos/huggingface/datasets/issues/614 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/614/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/614/comments | https://api.github.com/repos/huggingface/datasets/issues/614/events | https://github.com/huggingface/datasets/pull/614 | 699,177,110 | MDExOlB1bGxSZXF1ZXN0NDg0OTQ2MzA1 | 614 | [doc] Update deploy.sh | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 0 | 2020-09-11T11:06:13Z | 2020-09-14T08:49:19Z | 2020-09-14T08:49:17Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/614.diff",
"html_url": "https://github.com/huggingface/datasets/pull/614",
"merged_at": "2020-09-14T08:49:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/614.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/614... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/614/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/614/timeline | null | null | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/613 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/613/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/613/comments | https://api.github.com/repos/huggingface/datasets/issues/613/events | https://github.com/huggingface/datasets/pull/613 | 699,117,070 | MDExOlB1bGxSZXF1ZXN0NDg0ODkyMTUx | 613 | Add CoNLL-2003 shared task dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"events_url": "https://api.github.com/users/vblagoje/events{/privacy}",
"followers_url": "https://api.github.com/users/vblagoje/followers",
"following_url": "https://api.github.com/users/vblagoje/following{/other_user}",
"gists_url": "https... | [] | closed | false | null | [] | null | 7 | 2020-09-11T10:02:30Z | 2020-10-05T10:43:05Z | 2020-09-17T10:36:38Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/613.diff",
"html_url": "https://github.com/huggingface/datasets/pull/613",
"merged_at": "2020-09-17T10:36:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/613.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/613... | Please consider adding CoNLL-2003 shared task dataset as it's beneficial for token classification tasks. The motivation behind this PR is the [PR](https://github.com/huggingface/transformers/pull/7041) in the transformers project. This dataset would be not only useful for the usual run-of-the-mill NER tasks but also fo... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/613/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/613/timeline | null | null | true | [
"I think we should somewhere mention, that is the dataset in IOB2 tagging scheme, whereas the original dataset uses IOB1 :)",
"Indeed this is something we want to mention.\r\n\r\nIf would want to add more details about the IOB1->2 change, feel free to ignore my suggestions and edit the description + update the da... |
https://api.github.com/repos/huggingface/datasets/issues/612 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/612/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/612/comments | https://api.github.com/repos/huggingface/datasets/issues/612/events | https://github.com/huggingface/datasets/pull/612 | 699,008,644 | MDExOlB1bGxSZXF1ZXN0NDg0Nzk2Mjg5 | 612 | add multi-proc to dataset dict | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 0 | 2020-09-11T08:18:13Z | 2020-09-11T10:20:13Z | 2020-09-11T10:20:11Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/612.diff",
"html_url": "https://github.com/huggingface/datasets/pull/612",
"merged_at": "2020-09-11T10:20:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/612.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/612... | Add multi-proc to `DatasetDict` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/612/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/612/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/611 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/611/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/611/comments | https://api.github.com/repos/huggingface/datasets/issues/611/events | https://github.com/huggingface/datasets/issues/611 | 698,863,988 | MDU6SXNzdWU2OTg4NjM5ODg= | 611 | ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648 | {
"avatar_url": "https://avatars.githubusercontent.com/u/32364921?v=4",
"events_url": "https://api.github.com/users/sangyx/events{/privacy}",
"followers_url": "https://api.github.com/users/sangyx/followers",
"following_url": "https://api.github.com/users/sangyx/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | null | 6 | 2020-09-11T05:29:12Z | 2022-06-01T15:11:43Z | 2022-06-01T15:11:43Z | NONE | null | null | null | Hi, I'm trying to load a dataset from Dataframe, but I get the error:
```bash
---------------------------------------------------------------------------
ArrowCapacityError Traceback (most recent call last)
<ipython-input-7-146b6b495963> in <module>
----> 1 dataset = Dataset.from_pandas(emb)... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/611/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/611/timeline | null | completed | false | [
"Can you give us stats/information on your pandas DataFrame?",
"```\r\n<class 'pandas.core.frame.DataFrame'>\r\nInt64Index: 17136104 entries, 0 to 17136103\r\nData columns (total 6 columns):\r\n # Column Dtype \r\n--- ------ ----- \r\n 0 item_id int64 \r\n 1 item_titl object \r\n... |
https://api.github.com/repos/huggingface/datasets/issues/610 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/610/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/610/comments | https://api.github.com/repos/huggingface/datasets/issues/610/events | https://github.com/huggingface/datasets/issues/610 | 698,349,388 | MDU6SXNzdWU2OTgzNDkzODg= | 610 | Load text file for RoBERTa pre-training. | {
"avatar_url": "https://avatars.githubusercontent.com/u/33407613?v=4",
"events_url": "https://api.github.com/users/chiyuzhang94/events{/privacy}",
"followers_url": "https://api.github.com/users/chiyuzhang94/followers",
"following_url": "https://api.github.com/users/chiyuzhang94/following{/other_user}",
"gist... | [] | closed | false | null | [] | null | 43 | 2020-09-10T18:41:38Z | 2022-11-22T13:51:24Z | 2022-11-22T13:51:23Z | NONE | null | null | null | I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444
I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file.
According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file.... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/610/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/610/timeline | null | completed | false | [
"Could you try\r\n```python\r\nload_dataset('text', data_files='test.txt',cache_dir=\"./\", split=\"train\")\r\n```\r\n?\r\n\r\n`load_dataset` returns a dictionary by default, like {\"train\": your_dataset}",
"Hi @lhoestq\r\nThanks for your suggestion.\r\n\r\nI tried \r\n```\r\ndataset = load_dataset('text', data... |
https://api.github.com/repos/huggingface/datasets/issues/609 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/609/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/609/comments | https://api.github.com/repos/huggingface/datasets/issues/609/events | https://github.com/huggingface/datasets/pull/609 | 698,323,989 | MDExOlB1bGxSZXF1ZXN0NDg0MTc4Nzky | 609 | Update GLUE URLs (now hosted on FB) | {
"avatar_url": "https://avatars.githubusercontent.com/u/57466294?v=4",
"events_url": "https://api.github.com/users/jeswan/events{/privacy}",
"followers_url": "https://api.github.com/users/jeswan/followers",
"following_url": "https://api.github.com/users/jeswan/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | null | 2 | 2020-09-10T18:16:32Z | 2020-09-14T19:06:02Z | 2020-09-14T19:06:01Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/609.diff",
"html_url": "https://github.com/huggingface/datasets/pull/609",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/609.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/609"
} | NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/609/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/609/timeline | null | null | true | [
"Thanks for opening this PR :) \r\n\r\nWe changed the name of the lib from nlp to datasets yesterday.\r\nCould you rebase from master and re-generate the dataset_info.json file to fix the name changes ?",
"Rebased changes here: https://github.com/huggingface/datasets/pull/626"
] |
https://api.github.com/repos/huggingface/datasets/issues/608 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/608/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/608/comments | https://api.github.com/repos/huggingface/datasets/issues/608/events | https://github.com/huggingface/datasets/issues/608 | 698,291,156 | MDU6SXNzdWU2OTgyOTExNTY= | 608 | Don't use the old NYU GLUE dataset URLs | {
"avatar_url": "https://avatars.githubusercontent.com/u/57466294?v=4",
"events_url": "https://api.github.com/users/jeswan/events{/privacy}",
"followers_url": "https://api.github.com/users/jeswan/followers",
"following_url": "https://api.github.com/users/jeswan/following{/other_user}",
"gists_url": "https://a... | [] | closed | false | null | [] | null | 1 | 2020-09-10T17:47:02Z | 2020-09-16T06:53:18Z | 2020-09-16T06:53:18Z | CONTRIBUTOR | null | null | null | NYU is switching dataset hosting from Google to FB. Initial changes to `datasets` are in https://github.com/jeswan/nlp/commit/b7d4a071d432592ded971e30ef73330529de25ce. What tests do you suggest I run before opening a PR?
See: https://github.com/jiant-dev/jiant/issues/161 and https://github.com/nyu-mll/jiant/pull/111... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/608/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/608/timeline | null | completed | false | [
"Feel free to open the PR ;)\r\nThanks for updating the dataset_info.json file !"
] |
https://api.github.com/repos/huggingface/datasets/issues/607 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/607/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/607/comments | https://api.github.com/repos/huggingface/datasets/issues/607/events | https://github.com/huggingface/datasets/pull/607 | 698,094,442 | MDExOlB1bGxSZXF1ZXN0NDgzOTcyMDg4 | 607 | Add transmit_format wrapper and tests | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-10T15:03:50Z | 2020-09-10T15:21:48Z | 2020-09-10T15:21:47Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/607.diff",
"html_url": "https://github.com/huggingface/datasets/pull/607",
"merged_at": "2020-09-10T15:21:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/607.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/607... | Same as #605 but using a decorator on-top of dataset transforms that are not in place | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/607/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/607/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/606 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/606/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/606/comments | https://api.github.com/repos/huggingface/datasets/issues/606/events | https://github.com/huggingface/datasets/pull/606 | 698,050,442 | MDExOlB1bGxSZXF1ZXN0NDgzOTMzMDA1 | 606 | Quick fix :) | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 1 | 2020-09-10T14:32:06Z | 2020-09-10T16:18:32Z | 2020-09-10T16:18:30Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/606.diff",
"html_url": "https://github.com/huggingface/datasets/pull/606",
"merged_at": "2020-09-10T16:18:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/606.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/606... | `nlp` => `datasets` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 1,
"laugh": 1,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/606/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/606/timeline | null | null | true | [
":heart:"
] |
https://api.github.com/repos/huggingface/datasets/issues/605 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/605/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/605/comments | https://api.github.com/repos/huggingface/datasets/issues/605/events | https://github.com/huggingface/datasets/pull/605 | 697,887,401 | MDExOlB1bGxSZXF1ZXN0NDgzNzg1Mjc1 | 605 | [Datasets] Transmit format to children | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 1 | 2020-09-10T12:30:18Z | 2023-09-24T09:49:47Z | 2020-09-10T16:15:21Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/605.diff",
"html_url": "https://github.com/huggingface/datasets/pull/605",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/605.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/605"
} | Transmit format to children obtained when processing a dataset.
Added a test.
When concatenating datasets, if the formats are disparate, the concatenated dataset has a format reset to defaults. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/605/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/605/timeline | null | null | true | [
"Closing as #607 was merged"
] |
https://api.github.com/repos/huggingface/datasets/issues/604 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/604/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/604/comments | https://api.github.com/repos/huggingface/datasets/issues/604/events | https://github.com/huggingface/datasets/pull/604 | 697,774,581 | MDExOlB1bGxSZXF1ZXN0NDgzNjgxNTc0 | 604 | Update bucket prefix | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-10T11:01:13Z | 2020-09-10T12:45:33Z | 2020-09-10T12:45:32Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/604.diff",
"html_url": "https://github.com/huggingface/datasets/pull/604",
"merged_at": "2020-09-10T12:45:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/604.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/604... | cc @julien-c | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/604/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/604/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/603 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/603/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/603/comments | https://api.github.com/repos/huggingface/datasets/issues/603/events | https://github.com/huggingface/datasets/pull/603 | 697,758,750 | MDExOlB1bGxSZXF1ZXN0NDgzNjY2ODk5 | 603 | Set scripts version to master | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-10T10:47:44Z | 2020-09-10T11:02:05Z | 2020-09-10T11:02:04Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/603.diff",
"html_url": "https://github.com/huggingface/datasets/pull/603",
"merged_at": "2020-09-10T11:02:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/603.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/603... | By default the scripts version is master, so that if the library is installed with
```
pip install git+http://github.com/huggingface/nlp.git
```
or
```
git clone http://github.com/huggingface/nlp.git
pip install -e ./nlp
```
will use the latest scripts, and not the ones from the previous version. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/603/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/603/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/602 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/602/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/602/comments | https://api.github.com/repos/huggingface/datasets/issues/602/events | https://github.com/huggingface/datasets/pull/602 | 697,636,605 | MDExOlB1bGxSZXF1ZXN0NDgzNTU3NDM0 | 602 | apply offset to indices in multiprocessed map | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-10T08:54:30Z | 2020-09-10T11:03:39Z | 2020-09-10T11:03:37Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/602.diff",
"html_url": "https://github.com/huggingface/datasets/pull/602",
"merged_at": "2020-09-10T11:03:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/602.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/602... | Fix #597
I fixed the indices by applying an offset.
I added the case to our tests to make sure it doesn't happen again.
I also added the message proposed by @thomwolf in #597
```python
>>> d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2, load_from_cache_file=False)
Done writing 10 ... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/602/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/602/timeline | null | null | true | [] |
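The record above (PR 602) fixes the offset of the indices passed to a batched, multiprocessed `map`, as reported in issue 597 further down in this listing. Below is a minimal, runnable sketch of that call pattern, assuming the renamed `datasets` package is installed; the dataset slice, `num_proc=2` and the expectation in the comment are illustrative, not taken verbatim from the PR.

```python
from datasets import load_dataset


def fn(batch, indices):
    # With batched=True and with_indices=True, `indices` holds the row indices
    # of the current batch. After the offset fix described above, each worker
    # should see global indices (e.g. 0-4 and 5-9) rather than both shards
    # starting again at 0.
    print(indices)
    return batch


if __name__ == "__main__":
    # Small IMDB slice, mirroring the reproduction snippet in issue 597 below.
    d = load_dataset("imdb", split="test[:1%]").select(range(10))
    d.map(fn, with_indices=True, batched=True, num_proc=2, load_from_cache_file=False)
```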
https://api.github.com/repos/huggingface/datasets/issues/601 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/601/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/601/comments | https://api.github.com/repos/huggingface/datasets/issues/601/events | https://github.com/huggingface/datasets/pull/601 | 697,574,848 | MDExOlB1bGxSZXF1ZXN0NDgzNTAzMjAw | 601 | check if transformers has PreTrainedTokenizerBase | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-10T07:54:56Z | 2020-09-10T11:01:37Z | 2020-09-10T11:01:36Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/601.diff",
"html_url": "https://github.com/huggingface/datasets/pull/601",
"merged_at": "2020-09-10T11:01:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/601.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/601... | Fix #598 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/601/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/601/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/600/comments | https://api.github.com/repos/huggingface/datasets/issues/600/events | https://github.com/huggingface/datasets/issues/600 | 697,496,913 | MDU6SXNzdWU2OTc0OTY5MTM= | 600 | Pickling error when loading dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/17310286?v=4",
"events_url": "https://api.github.com/users/kandorm/events{/privacy}",
"followers_url": "https://api.github.com/users/kandorm/followers",
"following_url": "https://api.github.com/users/kandorm/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 5 | 2020-09-10T06:28:08Z | 2020-09-25T14:31:54Z | 2020-09-25T14:31:54Z | NONE | null | null | null | Hi,
I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as:
```
# line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
dataset = load_da... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/600/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/600/timeline | null | completed | false | [
"When I change from python3.6 to python3.8, it works! ",
"Does it work when you install `nlp` from source on python 3.6?",
"No, still the pickling error.",
"I wasn't able to reproduce on google colab (python 3.6.9 as well) with \r\n\r\npickle==4.0\r\ndill=0.3.2\r\ntransformers==3.1.0\r\ndatasets=1.0.1 (also t... |
https://api.github.com/repos/huggingface/datasets/issues/599 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/599/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/599/comments | https://api.github.com/repos/huggingface/datasets/issues/599/events | https://github.com/huggingface/datasets/pull/599 | 697,377,786 | MDExOlB1bGxSZXF1ZXN0NDgzMzI3ODQ5 | 599 | Add MATINF dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 2 | 2020-09-10T03:31:09Z | 2023-09-24T09:50:08Z | 2020-09-17T12:17:25Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/599.diff",
"html_url": "https://github.com/huggingface/datasets/pull/599",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/599.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/599"
} | @lhoestq The command to create metadata failed. I guess it's because the zip is not downloaded from a remote address? How to solve that? Also the CI fails and I don't know how to fix that :( | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/599/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/599/timeline | null | null | true | [
"Hi ! sorry for the late response\r\n\r\nCould you try to rebase from master ? We changed the named of the library last week so you have to include this change in your code.\r\n\r\nCan you give me more details about the error you get when running the cli command ?\r\n\r\nNote that in case of a manual download you h... |
https://api.github.com/repos/huggingface/datasets/issues/598 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/598/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/598/comments | https://api.github.com/repos/huggingface/datasets/issues/598/events | https://github.com/huggingface/datasets/issues/598 | 697,156,501 | MDU6SXNzdWU2OTcxNTY1MDE= | 598 | The current version of the package on github has an error when loading dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/43428393?v=4",
"events_url": "https://api.github.com/users/zeyuyun1/events{/privacy}",
"followers_url": "https://api.github.com/users/zeyuyun1/followers",
"following_url": "https://api.github.com/users/zeyuyun1/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | 3 | 2020-09-09T21:03:23Z | 2020-09-10T06:25:21Z | 2020-09-09T22:57:28Z | NONE | null | null | null | Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine):
To recreate the error:
First, installing nlp directly from source:
```
git clone https://github.com/huggingface/nlp.git
cd nlp
pip install -e .
``... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/598/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/598/timeline | null | completed | false | [
"Thanks for reporting !\r\nWhich version of transformers are you using ?\r\nIt looks like it doesn't have the PreTrainedTokenizerBase class",
"I was using transformer 2.9. And I switch to the latest transformer package. Everything works just fine!!\r\n\r\nThanks for helping! I should look more carefully next time... |
https://api.github.com/repos/huggingface/datasets/issues/597 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/597/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/597/comments | https://api.github.com/repos/huggingface/datasets/issues/597/events | https://github.com/huggingface/datasets/issues/597 | 697,112,029 | MDU6SXNzdWU2OTcxMTIwMjk= | 597 | Indices incorrect with multiprocessing | {
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"events_url": "https://api.github.com/users/joeddav/events{/privacy}",
"followers_url": "https://api.github.com/users/joeddav/followers",
"following_url": "https://api.github.com/users/joeddav/following{/other_user}",
"gists_url": "https:/... | [] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | 2 | 2020-09-09T19:50:56Z | 2020-09-10T11:03:37Z | 2020-09-10T11:03:37Z | CONTRIBUTOR | null | null | null | When `num_proc` > 1, the indices argument passed to the map function is incorrect:
```python
d = load_dataset('imdb', split='test[:1%]')
def fn(x, inds):
print(inds)
return x
d.select(range(10)).map(fn, with_indices=True, batched=True)
# [0, 1]
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
d.select(range(10... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/597/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/597/timeline | null | completed | false | [
"I fixed a bug that could cause this issue earlier today. Could you pull the latest version and try again ?",
"Still the case on master.\r\nI guess we should have an offset in the multi-procs indeed (hopefully it's enough).\r\n\r\nAlso, side note is that we should add some logging before the \"test\" to say we ar... |
https://api.github.com/repos/huggingface/datasets/issues/596 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/596/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/596/comments | https://api.github.com/repos/huggingface/datasets/issues/596/events | https://github.com/huggingface/datasets/pull/596 | 696,928,139 | MDExOlB1bGxSZXF1ZXN0NDgyOTM5MTgw | 596 | [style/quality] Moving to isort 5.0.0 + style/quality on datasets and metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 1 | 2020-09-09T15:47:21Z | 2020-09-10T10:05:04Z | 2020-09-10T10:05:03Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/596.diff",
"html_url": "https://github.com/huggingface/datasets/pull/596",
"merged_at": "2020-09-10T10:05:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/596.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/596... | Move the repo to isort 5.0.0.
Also start testing style/quality on datasets and metrics.
Specific rule: we allow F401 (unused imports) in metrics to be able to add imports to detect early on missing dependencies.
Maybe we could add this in datasets but while cleaning this I've seen many examples of really unused i... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/596/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/596/timeline | null | null | true | [
"Ready for review @lhoestq, just updated a few 156 files here"
] |
https://api.github.com/repos/huggingface/datasets/issues/595 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/595/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/595/comments | https://api.github.com/repos/huggingface/datasets/issues/595/events | https://github.com/huggingface/datasets/issues/595 | 696,892,304 | MDU6SXNzdWU2OTY4OTIzMDQ= | 595 | `Dataset`/`DatasetDict` has no attribute 'save_to_disk' | {
"avatar_url": "https://avatars.githubusercontent.com/u/488428?v=4",
"events_url": "https://api.github.com/users/sudarshan85/events{/privacy}",
"followers_url": "https://api.github.com/users/sudarshan85/followers",
"following_url": "https://api.github.com/users/sudarshan85/following{/other_user}",
"gists_url... | [] | closed | false | null | [] | null | 2 | 2020-09-09T15:01:52Z | 2020-09-09T16:20:19Z | 2020-09-09T16:20:18Z | NONE | null | null | null | Hi,
As the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) in the repo here has the method, the file `arrow_dataset.p... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/595/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/595/timeline | null | completed | false | [
"`pip install git+https://github.com/huggingface/nlp.git` should have done the job.\r\n\r\nDid you uninstall `nlp` before installing from github ?",
"> Did you uninstall `nlp` before installing from github ?\r\n\r\nI did not. I created a new environment and installed `nlp` directly from `github` and it worked!\r\... |
https://api.github.com/repos/huggingface/datasets/issues/594 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/594/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/594/comments | https://api.github.com/repos/huggingface/datasets/issues/594/events | https://github.com/huggingface/datasets/pull/594 | 696,816,893 | MDExOlB1bGxSZXF1ZXN0NDgyODQ1OTc5 | 594 | Fix germeval url | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-09T13:29:35Z | 2020-09-09T13:34:35Z | 2020-09-09T13:34:34Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/594.diff",
"html_url": "https://github.com/huggingface/datasets/pull/594",
"merged_at": "2020-09-09T13:34:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/594.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/594... | Continuation of #593 but without the dummy data hack | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/594/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/594/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/593 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/593/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/593/comments | https://api.github.com/repos/huggingface/datasets/issues/593/events | https://github.com/huggingface/datasets/pull/593 | 696,679,182 | MDExOlB1bGxSZXF1ZXN0NDgyNzI5NTgw | 593 | GermEval 2014: new download urls | {
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"events_url": "https://api.github.com/users/stefan-it/events{/privacy}",
"followers_url": "https://api.github.com/users/stefan-it/followers",
"following_url": "https://api.github.com/users/stefan-it/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 5 | 2020-09-09T10:07:29Z | 2020-09-09T14:16:54Z | 2020-09-09T13:35:15Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/593.diff",
"html_url": "https://github.com/huggingface/datasets/pull/593",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/593.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/593"
} | Hi,
unfortunately, the download links for the GermEval 2014 dataset have changed: they're now located on a Google Drive.
I changed the URLs and bumped the version from 1.0.0 to 2.0.0. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/593/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/593/timeline | null | null | true | [
"/cc: @vblagoje",
"Closing this one as #594 is merged (same changes except the dummy data hack)",
"Awesome @stefan-it ! @lhoestq how soon can I use the fixed GermEval dataset in HF token classification examples?",
"I've manually updated the script on S3, so you can actually use it right now with\r\n```python\... |
https://api.github.com/repos/huggingface/datasets/issues/592 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/592/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/592/comments | https://api.github.com/repos/huggingface/datasets/issues/592/events | https://github.com/huggingface/datasets/pull/592 | 696,619,986 | MDExOlB1bGxSZXF1ZXN0NDgyNjc4MDkw | 592 | Test in memory and on disk | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-09T08:59:30Z | 2020-09-09T13:50:04Z | 2020-09-09T13:50:03Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/592.diff",
"html_url": "https://github.com/huggingface/datasets/pull/592",
"merged_at": "2020-09-09T13:50:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/592.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/592... | I added test parameters to do every test both in memory and on disk.
I also found a bug in concatenate_dataset thanks to the new tests and fixed it. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/592/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/592/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/591 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/591/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/591/comments | https://api.github.com/repos/huggingface/datasets/issues/591/events | https://github.com/huggingface/datasets/pull/591 | 696,530,413 | MDExOlB1bGxSZXF1ZXN0NDgyNjAxMzc1 | 591 | fix #589 (backward compat) | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 0 | 2020-09-09T07:33:13Z | 2020-09-09T08:57:56Z | 2020-09-09T08:57:55Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/591.diff",
"html_url": "https://github.com/huggingface/datasets/pull/591",
"merged_at": "2020-09-09T08:57:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/591.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/591... | Fix #589 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/591/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/591/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/590/comments | https://api.github.com/repos/huggingface/datasets/issues/590/events | https://github.com/huggingface/datasets/issues/590 | 696,501,827 | MDU6SXNzdWU2OTY1MDE4Mjc= | 590 | The process cannot access the file because it is being used by another process (windows) | {
"avatar_url": "https://avatars.githubusercontent.com/u/22762845?v=4",
"events_url": "https://api.github.com/users/saareliad/events{/privacy}",
"followers_url": "https://api.github.com/users/saareliad/followers",
"following_url": "https://api.github.com/users/saareliad/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 7 | 2020-09-09T07:01:36Z | 2020-09-25T14:02:28Z | 2020-09-25T14:02:28Z | NONE | null | null | null | Hi, I consistently get the following error when developing in my PC (windows 10):
```
train_dataset = train_dataset.map(convert_to_features, batched=True)
File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map
shutil.move(tmp_file.... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/590/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/590/timeline | null | completed | false | [
"Hi, which version of `nlp` are you using?\r\n\r\nBy the way we'll be releasing today a significant update fixing many issues (but also comprising a few breaking changes).\r\nYou can see more informations here #545 and try it by installing from source from the master branch.",
"I'm using version 0.4.0.\r\n\r\n",
... |
https://api.github.com/repos/huggingface/datasets/issues/589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/589/comments | https://api.github.com/repos/huggingface/datasets/issues/589/events | https://github.com/huggingface/datasets/issues/589 | 696,488,447 | MDU6SXNzdWU2OTY0ODg0NDc= | 589 | Cannot use nlp.load_dataset text, AttributeError: module 'nlp.utils' has no attribute 'logging' | {
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"events_url": "https://api.github.com/users/ksjae/events{/privacy}",
"followers_url": "https://api.github.com/users/ksjae/followers",
"following_url": "https://api.github.com/users/ksjae/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | null | 0 | 2020-09-09T06:46:53Z | 2020-09-09T08:57:54Z | 2020-09-09T08:57:54Z | NONE | null | null | null |
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 533, in load_dataset
builder_cls = import_main_class(module_path, dataset=True)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/589/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/589/timeline | null | completed | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/588/comments | https://api.github.com/repos/huggingface/datasets/issues/588/events | https://github.com/huggingface/datasets/pull/588 | 695,249,809 | MDExOlB1bGxSZXF1ZXN0NDgxNTE5NzQx | 588 | Support pathlike obj in load dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-07T16:13:21Z | 2020-09-08T07:45:19Z | 2020-09-08T07:45:18Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/588.diff",
"html_url": "https://github.com/huggingface/datasets/pull/588",
"merged_at": "2020-09-08T07:45:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/588.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/588... | Fix #582
(I recreated the PR, I got an issue with git) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/588/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/587/comments | https://api.github.com/repos/huggingface/datasets/issues/587/events | https://github.com/huggingface/datasets/pull/587 | 695,246,018 | MDExOlB1bGxSZXF1ZXN0NDgxNTE2Mzkx | 587 | Support pathlike obj in load dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-07T16:09:16Z | 2020-09-07T16:10:35Z | 2020-09-07T16:10:35Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/587.diff",
"html_url": "https://github.com/huggingface/datasets/pull/587",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/587.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/587"
} | Fix #582 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/587/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/587/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/586/comments | https://api.github.com/repos/huggingface/datasets/issues/586/events | https://github.com/huggingface/datasets/pull/586 | 695,237,999 | MDExOlB1bGxSZXF1ZXN0NDgxNTA5MzU1 | 586 | Better message when data files is empty | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-07T15:59:57Z | 2020-09-09T09:00:09Z | 2020-09-09T09:00:08Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/586.diff",
"html_url": "https://github.com/huggingface/datasets/pull/586",
"merged_at": "2020-09-09T09:00:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/586.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/586... | Fix #581 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/586/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/586/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/585/comments | https://api.github.com/repos/huggingface/datasets/issues/585/events | https://github.com/huggingface/datasets/pull/585 | 695,191,209 | MDExOlB1bGxSZXF1ZXN0NDgxNDY4NTM4 | 585 | Fix select for pyarrow < 1.0.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-07T15:02:52Z | 2020-09-08T07:43:17Z | 2020-09-08T07:43:15Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/585.diff",
"html_url": "https://github.com/huggingface/datasets/pull/585",
"merged_at": "2020-09-08T07:43:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/585.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/585... | Fix #583 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/585/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/585/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/584/comments | https://api.github.com/repos/huggingface/datasets/issues/584/events | https://github.com/huggingface/datasets/pull/584 | 695,186,652 | MDExOlB1bGxSZXF1ZXN0NDgxNDY0NjEz | 584 | Use github versioning | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 1 | 2020-09-07T14:58:15Z | 2020-09-09T13:37:35Z | 2020-09-09T13:37:34Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/584.diff",
"html_url": "https://github.com/huggingface/datasets/pull/584",
"merged_at": "2020-09-09T13:37:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/584.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/584... | Right now dataset scripts and metrics are downloaded from S3 which is in sync with master. It means that it's not currently possible to pin the dataset/metric script version.
To fix that I changed the download url from S3 to github, and added a `version` parameter in `load_dataset` and `load_metric` to pin a certai... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/584/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/584/timeline | null | null | true | [
"I noticed that datasets like `cnn_dailymail` need the `version` parameter to be passed to its `config_kwargs`.\r\nShall we rename the `version` paramater in `load_dataset` ? Maybe `repo_version` or `script_version` ?"
] |
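A hypothetical illustration of the script pinning that PR 584 above introduces. The comment in the record shows the keyword name was still being debated (`version` vs. `repo_version` vs. `script_version`), so the argument below is an assumption; later 1.x releases of the library exposed it as `script_version`, and the tag used here is only an example.

```python
from datasets import load_dataset, load_metric

# Pin the dataset and metric loading scripts to a fixed git reference instead
# of always fetching the latest ones from the repository.
squad = load_dataset("squad", script_version="1.0.2")
squad_metric = load_metric("squad", script_version="1.0.2")
```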
https://api.github.com/repos/huggingface/datasets/issues/583 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/583/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/583/comments | https://api.github.com/repos/huggingface/datasets/issues/583/events | https://github.com/huggingface/datasets/issues/583 | 695,166,265 | MDU6SXNzdWU2OTUxNjYyNjU= | 583 | ArrowIndexError on Dataset.select | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-07T14:36:29Z | 2020-09-08T07:43:15Z | 2020-09-08T07:43:15Z | MEMBER | null | null | null | If the indices table consists in several chunks, then `dataset.select` results in an `ArrowIndexError` error for pyarrow < 1.0.0
Example:
```python
from nlp import load_dataset
mnli = load_dataset("glue", "mnli", split="train")
shuffled = mnli.shuffle(seed=42)
mnli.select(list(range(len(mnli))))
```
rai... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/583/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/583/timeline | null | completed | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/582 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/582/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/582/comments | https://api.github.com/repos/huggingface/datasets/issues/582/events | https://github.com/huggingface/datasets/issues/582 | 695,126,456 | MDU6SXNzdWU2OTUxMjY0NTY= | 582 | Allow for PathLike objects | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url":... | [] | closed | false | null | [] | null | 0 | 2020-09-07T13:54:51Z | 2020-09-08T07:45:17Z | 2020-09-08T07:45:17Z | CONTRIBUTOR | null | null | null | Using PathLike objects as input for `load_dataset` does not seem to work. The following will throw an error.
```python
files = list(Path(r"D:\corpora\yourcorpus").glob("*.txt"))
dataset = load_dataset("text", data_files=files)
```
Traceback:
```
Traceback (most recent call last):
File "C:/dev/python/dut... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/582/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/582/timeline | null | completed | false | [] |
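A minimal sketch of the path-like usage that the record above (issue 582) asks for and that PR 588, earlier in this listing, implements; it assumes a version containing that fix, and the corpus directory is a placeholder taken from the issue.

```python
from pathlib import Path

from datasets import load_dataset

# Collect text files as pathlib.Path objects, as in the issue above.
files = list(Path("corpora/yourcorpus").glob("*.txt"))

# With the fix, PathLike objects are accepted in data_files directly;
# on older versions each path had to be converted with str() first.
dataset = load_dataset("text", data_files=files)
print(dataset)
```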
https://api.github.com/repos/huggingface/datasets/issues/581 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/581/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/581/comments | https://api.github.com/repos/huggingface/datasets/issues/581/events | https://github.com/huggingface/datasets/issues/581 | 695,120,517 | MDU6SXNzdWU2OTUxMjA1MTc= | 581 | Better error message when input file does not exist | {
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"events_url": "https://api.github.com/users/BramVanroy/events{/privacy}",
"followers_url": "https://api.github.com/users/BramVanroy/followers",
"following_url": "https://api.github.com/users/BramVanroy/following{/other_user}",
"gists_url":... | [] | closed | false | null | [] | null | 0 | 2020-09-07T13:47:59Z | 2020-09-09T09:00:07Z | 2020-09-09T09:00:07Z | CONTRIBUTOR | null | null | null | In the following scenario, when `data_files` is an empty list, the stack trace and error message could be improved. This can probably be solved by checking for each file whether it actually exists and/or whether the argument is not false-y.
```python
dataset = load_dataset("text", data_files=[])
```
Example err... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/581/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/581/timeline | null | completed | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/580/comments | https://api.github.com/repos/huggingface/datasets/issues/580/events | https://github.com/huggingface/datasets/issues/580 | 694,954,551 | MDU6SXNzdWU2OTQ5NTQ1NTE= | 580 | nlp re-creates already-there caches when using a script, but not within a shell | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | 2 | 2020-09-07T10:23:50Z | 2020-09-07T15:19:09Z | 2020-09-07T14:26:41Z | CONTRIBUTOR | null | null | null | `nlp` keeps creating new caches for the same file when launching `filter` from a script, and behaves correctly from within the shell.
Example: try running
```
import nlp
hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0)
hans_hard_data = nlp.load_dataset('hans', s... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/580/timeline | null | completed | false | [
"Couln't reproduce on my side :/ \r\nlet me know if you manage to reproduce on another env (colab for example)",
"Fixed with a clean re-install!"
] |
https://api.github.com/repos/huggingface/datasets/issues/579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/579/comments | https://api.github.com/repos/huggingface/datasets/issues/579/events | https://github.com/huggingface/datasets/pull/579 | 694,947,599 | MDExOlB1bGxSZXF1ZXN0NDgxMjU1OTI5 | 579 | Doc metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 0 | 2020-09-07T10:15:24Z | 2020-09-10T13:06:11Z | 2020-09-10T13:06:10Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/579.diff",
"html_url": "https://github.com/huggingface/datasets/pull/579",
"merged_at": "2020-09-10T13:06:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/579.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/579... | Adding documentation on metrics loading/using/sharing | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/579/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/579/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/578 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/578/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/578/comments | https://api.github.com/repos/huggingface/datasets/issues/578/events | https://github.com/huggingface/datasets/pull/578 | 694,849,940 | MDExOlB1bGxSZXF1ZXN0NDgxMTczNDE0 | 578 | Add CommonGen Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 0 | 2020-09-07T08:17:17Z | 2020-09-07T11:50:29Z | 2020-09-07T11:49:07Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/578.diff",
"html_url": "https://github.com/huggingface/datasets/pull/578",
"merged_at": "2020-09-07T11:49:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/578.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/578... | CC Authors:
@yuchenlin @MichaelZhouwang | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/578/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/578/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/577 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/577/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/577/comments | https://api.github.com/repos/huggingface/datasets/issues/577/events | https://github.com/huggingface/datasets/issues/577 | 694,607,148 | MDU6SXNzdWU2OTQ2MDcxNDg= | 577 | Some languages in wikipedia dataset are not loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4",
"events_url": "https://api.github.com/users/gaguilar/events{/privacy}",
"followers_url": "https://api.github.com/users/gaguilar/followers",
"following_url": "https://api.github.com/users/gaguilar/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 16 | 2020-09-07T01:16:29Z | 2023-04-11T22:50:48Z | 2022-10-11T11:16:04Z | CONTRIBUTOR | null | null | null | Hi,
I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them:
```
import nlp
langs = ['ar', 'af', '... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/577/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/577/timeline | null | completed | false | [
"Some wikipedia languages have already been processed by us and are hosted on our google storage. This is the case for \"fr\" and \"en\" for example.\r\n\r\nFor other smaller languages (in terms of bytes), they are directly downloaded and parsed from the wikipedia dump site.\r\nParsing can take some time for langua... |
https://api.github.com/repos/huggingface/datasets/issues/576 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/576/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/576/comments | https://api.github.com/repos/huggingface/datasets/issues/576/events | https://github.com/huggingface/datasets/pull/576 | 694,348,645 | MDExOlB1bGxSZXF1ZXN0NDgwNzM3NDQ1 | 576 | Fix the code block in doc | {
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 1 | 2020-09-06T11:40:55Z | 2020-09-07T07:37:32Z | 2020-09-07T07:37:18Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/576.diff",
"html_url": "https://github.com/huggingface/datasets/pull/576",
"merged_at": "2020-09-07T07:37:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/576.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/576... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/576/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/576/timeline | null | null | true | [
"thanks :)"
] | |
https://api.github.com/repos/huggingface/datasets/issues/575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/575/comments | https://api.github.com/repos/huggingface/datasets/issues/575/events | https://github.com/huggingface/datasets/issues/575 | 693,691,611 | MDU6SXNzdWU2OTM2OTE2MTE= | 575 | Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading. | {
"avatar_url": "https://avatars.githubusercontent.com/u/488428?v=4",
"events_url": "https://api.github.com/users/sudarshan85/events{/privacy}",
"followers_url": "https://api.github.com/users/sudarshan85/followers",
"following_url": "https://api.github.com/users/sudarshan85/following{/other_user}",
"gists_url... | [] | closed | false | null | [] | null | 6 | 2020-09-04T21:46:25Z | 2020-09-22T10:41:36Z | 2020-09-22T10:41:36Z | NONE | null | null | null | Hi,
I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset:
```
>>> from nlp import load_dataset
>>> dataset = load_dataset('glue', 'mrpc', split='train')
```
However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the la... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/575/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/575/timeline | null | completed | false | [
"Update:\r\n\r\nThe imdb download completed after a long time (about 45 mins). Ofcourse once download loading was instantaneous. Also, the loaded object was of type `arrow_dataset`. \r\n\r\nThe urls for glue still doesn't work though.",
"Thanks for the report, I'll give a look!",
"I am also seeing a similar err... |
https://api.github.com/repos/huggingface/datasets/issues/574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/574/comments | https://api.github.com/repos/huggingface/datasets/issues/574/events | https://github.com/huggingface/datasets/pull/574 | 693,364,853 | MDExOlB1bGxSZXF1ZXN0NDc5ODU5NzQy | 574 | Add modules cache | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 2 | 2020-09-04T16:30:03Z | 2020-09-22T10:27:08Z | 2020-09-07T09:01:35Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/574.diff",
"html_url": "https://github.com/huggingface/datasets/pull/574",
"merged_at": "2020-09-07T09:01:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/574.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/574... | As discusses in #554 , we should use a module cache directory outside of the python packages directory since we may not have write permissions.
I added a new HF_MODULES_PATH directory that is added to the python path when doing `import nlp`.
In this directory, a module `nlp_modules` is created so that datasets can ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/574/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/574/timeline | null | null | true | [
"All the tests pass on my side. Not sure if it is a cache issue or a pytest issue or a circleci issue.\r\nEDIT: I have the same error on google colab. Trying to fix that",
"I think I fixed it (sorry didn't notice you were on it as well)"
] |
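A minimal sketch of relying on the modules directory described in PR #574 above. The `HF_MODULES_PATH` name comes from the PR itself, but treating it as an environment variable read at import time, and the cache path used below, are assumptions:

```python
import os

# Assumption: HF_MODULES_PATH (named in PR #574) is picked up when `nlp` is
# imported, so set it to a writable location before the import.
os.environ["HF_MODULES_PATH"] = os.path.expanduser("~/.cache/nlp_modules")

import nlp  # imported after setting the variable on purpose

dataset = nlp.load_dataset("squad")
```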
https://api.github.com/repos/huggingface/datasets/issues/573 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/573/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/573/comments | https://api.github.com/repos/huggingface/datasets/issues/573/events | https://github.com/huggingface/datasets/pull/573 | 693,091,790 | MDExOlB1bGxSZXF1ZXN0NDc5NjE4Mzc2 | 573 | Faster caching for text dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-04T11:58:34Z | 2020-09-04T12:53:24Z | 2020-09-04T12:53:23Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/573.diff",
"html_url": "https://github.com/huggingface/datasets/pull/573",
"merged_at": "2020-09-04T12:53:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/573.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/573... | As mentioned in #546 and #548 , hashing `data_files` contents to get the cache directory name for a text dataset can take a long time.
To make it faster I changed the hashing so that it takes into account the `path` and the `last modified timestamp` of each data file, instead of iterating through the content of each... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/573/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/573/timeline | null | null | true | [] |
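The speed-up in PR #573 above comes from hashing file metadata instead of file contents. An illustrative stand-alone sketch of that idea (not the library's actual implementation):

```python
import hashlib
import os

def quick_data_files_hash(paths):
    # Hash each file's path, size and last-modified time instead of its
    # contents, so the cache key is cheap to compute even for huge files.
    h = hashlib.sha256()
    for path in sorted(paths):
        stat = os.stat(path)
        h.update(path.encode("utf-8"))
        h.update(str(stat.st_size).encode("utf-8"))
        h.update(str(stat.st_mtime).encode("utf-8"))
    return h.hexdigest()
```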
https://api.github.com/repos/huggingface/datasets/issues/572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/572/comments | https://api.github.com/repos/huggingface/datasets/issues/572/events | https://github.com/huggingface/datasets/pull/572 | 692,598,231 | MDExOlB1bGxSZXF1ZXN0NDc5MTgyNDU3 | 572 | Add CLUE Benchmark (11 datasets) | {
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"events_url": "https://api.github.com/users/JetRunner/events{/privacy}",
"followers_url": "https://api.github.com/users/JetRunner/followers",
"following_url": "https://api.github.com/users/JetRunner/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 3 | 2020-09-04T01:57:40Z | 2020-09-07T09:59:11Z | 2020-09-07T09:59:10Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/572.diff",
"html_url": "https://github.com/huggingface/datasets/pull/572",
"merged_at": "2020-09-07T09:59:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/572.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/572... | Add 11 tasks of [CLUE](https://github.com/CLUEbenchmark/CLUE). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/572/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/572/timeline | null | null | true | [
"Thanks, @lhoestq! I've addressed the comments. \r\nAlso, I have tried to use `ClassLabel` [when possible](https://github.com/huggingface/nlp/pull/572/files#diff-1026ac7d7b78bf029cb0ebe63162c77dR297). Is there still somewhere else we can use `ClassLabel`? ",
"I believe CI failure is unrelated.",
"Great job! "
] |
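A hedged sketch of loading one of the CLUE tasks added in PR #572 above; the config names "tnews" and "afqmc" come from the upstream CLUE benchmark and are assumed to match the configs in the PR:

```python
import nlp

# Assumed config names; see the CLUE benchmark for the full list of 11 tasks.
tnews = nlp.load_dataset("clue", "tnews")
afqmc_train = nlp.load_dataset("clue", "afqmc", split="train")
print(afqmc_train[0])
```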
https://api.github.com/repos/huggingface/datasets/issues/571 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/571/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/571/comments | https://api.github.com/repos/huggingface/datasets/issues/571/events | https://github.com/huggingface/datasets/pull/571 | 692,109,287 | MDExOlB1bGxSZXF1ZXN0NDc4NzQ2MjMz | 571 | Serialization | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 4 | 2020-09-03T16:21:38Z | 2020-09-07T07:46:08Z | 2020-09-07T07:46:07Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/571.diff",
"html_url": "https://github.com/huggingface/datasets/pull/571",
"merged_at": "2020-09-07T07:46:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/571.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/571... | I added `save` and `load` method to serialize/deserialize a dataset object in a folder.
It moves the arrow files there (or writes them if the tables were in memory), and saves the pickle state in a json file `state.json`, except the info, which is stored in a separate file `dataset_info.json`.
Example:
```python
import ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/571/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/571/timeline | null | null | true | [
"I've added save/load for dataset dicts.\r\n\r\nI agree that in the future we should also have a way to save indexes too, and also the in-place history of transforms.\r\n\r\nAlso I understand that it would be cool to have the load function directly at the root of the library, but I'm not sure this should be inside ... |
https://api.github.com/repos/huggingface/datasets/issues/570 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/570/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/570/comments | https://api.github.com/repos/huggingface/datasets/issues/570/events | https://github.com/huggingface/datasets/pull/570 | 691,846,397 | MDExOlB1bGxSZXF1ZXN0NDc4NTI3OTQz | 570 | add reuters21578 dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.githu... | [] | closed | false | null | [] | null | 0 | 2020-09-03T10:25:47Z | 2020-09-03T10:46:52Z | 2020-09-03T10:46:51Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/570.diff",
"html_url": "https://github.com/huggingface/datasets/pull/570",
"merged_at": "2020-09-03T10:46:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/570.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/570... | Reopen a PR this the merge. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/570/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/570/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/569 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/569/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/569/comments | https://api.github.com/repos/huggingface/datasets/issues/569/events | https://github.com/huggingface/datasets/pull/569 | 691,832,720 | MDExOlB1bGxSZXF1ZXN0NDc4NTE2Mzc2 | 569 | Revert "add reuters21578 dataset" | {
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.githu... | [] | closed | false | null | [] | null | 0 | 2020-09-03T10:06:16Z | 2020-09-03T10:07:13Z | 2020-09-03T10:07:12Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/569.diff",
"html_url": "https://github.com/huggingface/datasets/pull/569",
"merged_at": "2020-09-03T10:07:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/569.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/569... | Reverts huggingface/nlp#471 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/569/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/569/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/568/comments | https://api.github.com/repos/huggingface/datasets/issues/568/events | https://github.com/huggingface/datasets/issues/568 | 691,638,656 | MDU6SXNzdWU2OTE2Mzg2NTY= | 568 | `metric.compute` throws `ArrowInvalid` error | {
"avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4",
"events_url": "https://api.github.com/users/ibeltagy/events{/privacy}",
"followers_url": "https://api.github.com/users/ibeltagy/followers",
"following_url": "https://api.github.com/users/ibeltagy/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 3 | 2020-09-03T04:56:57Z | 2020-10-05T16:33:53Z | 2020-10-05T16:33:53Z | NONE | null | null | null | I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly I can't easily reproduce it. This is using `nlp==0.4.0`
```
File "/home/beltagy/trainer.py", line 92, in validation_step
rouge_scores = rouge.compute(predictions=generated_str, references=gold_st... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/568/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/568/timeline | null | completed | false | [
"Hmm might be related to what we are solving in #564",
"Could you try to update to `datasets>=1.0.0` (we changed the name of the library) and try again ?\r\nIf is was related to the distributed setup settings it must be fixed.\r\nIf it was related to empty metric inputs it's going to be fixed in #654 ",
"Closin... |
https://api.github.com/repos/huggingface/datasets/issues/567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/567/comments | https://api.github.com/repos/huggingface/datasets/issues/567/events | https://github.com/huggingface/datasets/pull/567 | 691,430,245 | MDExOlB1bGxSZXF1ZXN0NDc4MTc2Njgx | 567 | Fix BLEURT metrics for backward compatibility | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 0 | 2020-09-02T21:22:35Z | 2020-09-03T07:29:52Z | 2020-09-03T07:29:50Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/567.diff",
"html_url": "https://github.com/huggingface/datasets/pull/567",
"merged_at": "2020-09-03T07:29:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/567.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/567... | Fix #565 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/567/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/567/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/566/comments | https://api.github.com/repos/huggingface/datasets/issues/566/events | https://github.com/huggingface/datasets/pull/566 | 691,160,208 | MDExOlB1bGxSZXF1ZXN0NDc3OTM2NTIz | 566 | Remove logger pickling to fix gg colab issues | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-02T16:16:21Z | 2020-09-03T16:31:53Z | 2020-09-03T16:31:52Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/566.diff",
"html_url": "https://github.com/huggingface/datasets/pull/566",
"merged_at": "2020-09-03T16:31:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/566.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/566... | A `logger` objects are not picklable in google colab, contrary to `logger` objects in jupyter notebooks or in python shells.
It creates some issues in google colab right now.
Indeed by calling any `Dataset` method, the fingerprint update pickles the transform function, and as the logger comes with it, it results in... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/566/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/566/timeline | null | null | true | [] |
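A user-side illustration of why the fingerprinting described in PR #566 above trips over loggers: a mapped function that closes over a module-level logger drags the logger into the pickled transform. This is only a sketch of the safer user pattern, not the library's internal fix:

```python
import logging

# Safer pattern: look the logger up inside the mapped function instead of
# capturing a module-level logger in the closure that gets pickled.
def add_length(example):
    logging.getLogger(__name__).debug("processing one example")
    example["length"] = len(example["text"])
    return example
```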
https://api.github.com/repos/huggingface/datasets/issues/565 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/565/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/565/comments | https://api.github.com/repos/huggingface/datasets/issues/565/events | https://github.com/huggingface/datasets/issues/565 | 691,039,121 | MDU6SXNzdWU2OTEwMzkxMjE= | 565 | No module named 'nlp.logging' | {
"avatar_url": "https://avatars.githubusercontent.com/u/66633754?v=4",
"events_url": "https://api.github.com/users/melody-ju/events{/privacy}",
"followers_url": "https://api.github.com/users/melody-ju/followers",
"following_url": "https://api.github.com/users/melody-ju/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 2 | 2020-09-02T13:49:50Z | 2020-09-03T07:29:50Z | 2020-09-03T07:29:50Z | NONE | null | null | null | Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing?
```
>>> import nlp
2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic l... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/565/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/565/timeline | null | completed | false | [
"Thanks for reporting.\r\n\r\nApparently this is a versioning issue: the lib downloaded the `bleurt` script from the master branch where we did this change recently. We'll fix that in a new release this week or early next week. Cc @thomwolf \r\n\r\nUntil that, I'd suggest you to download the right bleurt folder fro... |
https://api.github.com/repos/huggingface/datasets/issues/564 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/564/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/564/comments | https://api.github.com/repos/huggingface/datasets/issues/564/events | https://github.com/huggingface/datasets/pull/564 | 691,000,020 | MDExOlB1bGxSZXF1ZXN0NDc3ODAyMTk2 | 564 | Wait for writing in distributed metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 7 | 2020-09-02T12:58:50Z | 2020-09-09T09:13:23Z | 2020-09-09T09:13:22Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/564.diff",
"html_url": "https://github.com/huggingface/datasets/pull/564",
"merged_at": "2020-09-09T09:13:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/564.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/564... | There were CI bugs where a distributed metric would try to read all the files in process 0 while the other processes haven't started writing.
To fix that, I added a custom locking mechanism that waits for the file to exist before trying to read it. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/564/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/564/timeline | null | null | true | [
"I agree this fix the problem for the CI where the files are always created in a new and clean temporary directory.\r\n\r\nHowever, in a general setting of a succession of fast distributed operation, the files could already exist from previous metrics runs but one process may still finish before another has even st... |
https://api.github.com/repos/huggingface/datasets/issues/563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/563/comments | https://api.github.com/repos/huggingface/datasets/issues/563/events | https://github.com/huggingface/datasets/pull/563 | 690,908,674 | MDExOlB1bGxSZXF1ZXN0NDc3NzI2MTEz | 563 | [Large datasets] Speed up download and processing | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 2 | 2020-09-02T10:31:54Z | 2020-09-09T09:03:33Z | 2020-09-09T09:03:32Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/563.diff",
"html_url": "https://github.com/huggingface/datasets/pull/563",
"merged_at": "2020-09-09T09:03:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/563.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/563... | Various improvements to speed-up creation and processing of large scale datasets.
Currently:
- distributed downloads
- remove etag from datafiles hashes to spare a request when restarting a failed download | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/563/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/563/timeline | null | null | true | [
"Looks all good :)\r\nI rebased from master and added a test for parallel `map_nested`",
"you're da best"
] |
https://api.github.com/repos/huggingface/datasets/issues/562 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/562/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/562/comments | https://api.github.com/repos/huggingface/datasets/issues/562/events | https://github.com/huggingface/datasets/pull/562 | 690,907,604 | MDExOlB1bGxSZXF1ZXN0NDc3NzI1MjMx | 562 | [Reproductibility] Allow to pin versions of datasets/metrics | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 1 | 2020-09-02T10:30:13Z | 2023-09-24T09:49:42Z | 2020-09-09T13:04:54Z | MEMBER | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/562.diff",
"html_url": "https://github.com/huggingface/datasets/pull/562",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/562.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/562"
} | Repurpose the `version` attribute in datasets and metrics to let the user pin a specific version of datasets and metric scripts:
```
dataset = nlp.load_dataset('squad', version='1.0.0')
metric = nlp.load_metric('squad', version='1.0.0')
```
Notes:
- version numbers are the release version of the library
- curre... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/562/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/562/timeline | null | null | true | [
"Closing this one in favor of #584 "
] |
https://api.github.com/repos/huggingface/datasets/issues/561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/561/comments | https://api.github.com/repos/huggingface/datasets/issues/561/events | https://github.com/huggingface/datasets/pull/561 | 690,871,415 | MDExOlB1bGxSZXF1ZXN0NDc3Njk1NDQy | 561 | Made `share_dataset` more readable | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | 0 | 2020-09-02T09:34:48Z | 2020-09-03T09:00:30Z | 2020-09-03T09:00:29Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/561.diff",
"html_url": "https://github.com/huggingface/datasets/pull/561",
"merged_at": "2020-09-03T09:00:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/561.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/561... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/561/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/561/timeline | null | null | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/560 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/560/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/560/comments | https://api.github.com/repos/huggingface/datasets/issues/560/events | https://github.com/huggingface/datasets/issues/560 | 690,488,764 | MDU6SXNzdWU2OTA0ODg3NjQ= | 560 | Using custom DownloadConfig results in an error | {
"avatar_url": "https://avatars.githubusercontent.com/u/1789921?v=4",
"events_url": "https://api.github.com/users/ynouri/events{/privacy}",
"followers_url": "https://api.github.com/users/ynouri/followers",
"following_url": "https://api.github.com/users/ynouri/following{/other_user}",
"gists_url": "https://ap... | [] | closed | false | null | [] | null | 6 | 2020-09-01T22:23:02Z | 2022-10-04T17:23:45Z | 2022-10-04T17:23:45Z | NONE | null | null | null | ## Version / Environment
Ubuntu 18.04
Python 3.6.8
nlp 0.4.0
## Description
Loading the `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error.
## How to reprodu... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/560/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/560/timeline | null | completed | false | [
"From my limited understanding, part of the issue seems related to the `prepare_module` and `download_and_prepare` functions each handling the case where no config is passed. For example, `prepare_module` does mutate the object passed and forces the flags `extract_compressed_file` and `force_extract` to `True`.\r\... |
https://api.github.com/repos/huggingface/datasets/issues/559 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/559/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/559/comments | https://api.github.com/repos/huggingface/datasets/issues/559/events | https://github.com/huggingface/datasets/pull/559 | 690,411,263 | MDExOlB1bGxSZXF1ZXN0NDc3MzAzOTM2 | 559 | Adding the KILT knowledge source and tasks | {
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | null | 1 | 2020-09-01T20:05:13Z | 2020-09-04T18:05:47Z | 2020-09-04T18:05:47Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/559.diff",
"html_url": "https://github.com/huggingface/datasets/pull/559",
"merged_at": "2020-09-04T18:05:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/559.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/559... | This adds Wikipedia pre-processed for KILT, as well as the task data. Only the question IDs are provided for TriviaQA, but they can easily be mapped back with:
```
import nlp
kilt_wikipedia = nlp.load_dataset('kilt_wikipedia')
kilt_tasks = nlp.load_dataset('kilt_tasks')
triviaqa = nlp.load_dataset('trivia_qa',... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/559/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/559/timeline | null | null | true | [
"Feel free to merge when you are happy with it @yjernite :-)"
] |
https://api.github.com/repos/huggingface/datasets/issues/558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/558/comments | https://api.github.com/repos/huggingface/datasets/issues/558/events | https://github.com/huggingface/datasets/pull/558 | 690,318,105 | MDExOlB1bGxSZXF1ZXN0NDc3MjI2ODA0 | 558 | Rerun pip install -e | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-01T17:24:39Z | 2020-09-01T17:24:51Z | 2020-09-01T17:24:50Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/558.diff",
"html_url": "https://github.com/huggingface/datasets/pull/558",
"merged_at": "2020-09-01T17:24:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/558.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/558... | Hopefully it fixes the github actions | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/558/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/558/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/557 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/557/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/557/comments | https://api.github.com/repos/huggingface/datasets/issues/557/events | https://github.com/huggingface/datasets/pull/557 | 690,220,135 | MDExOlB1bGxSZXF1ZXN0NDc3MTQ1NjAx | 557 | Fix a few typos | {
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https... | [] | closed | false | null | [] | null | 0 | 2020-09-01T15:03:24Z | 2020-09-02T07:39:08Z | 2020-09-02T07:39:07Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/557.diff",
"html_url": "https://github.com/huggingface/datasets/pull/557",
"merged_at": "2020-09-02T07:39:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/557.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/557... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/557/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/557/timeline | null | null | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/556 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/556/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/556/comments | https://api.github.com/repos/huggingface/datasets/issues/556/events | https://github.com/huggingface/datasets/pull/556 | 690,218,423 | MDExOlB1bGxSZXF1ZXN0NDc3MTQ0MTky | 556 | Add DailyDialog | {
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https... | [] | closed | false | null | [] | null | 0 | 2020-09-01T15:01:15Z | 2020-09-03T15:42:03Z | 2020-09-03T15:38:39Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/556.diff",
"html_url": "https://github.com/huggingface/datasets/pull/556",
"merged_at": "2020-09-03T15:38:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/556.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/556... | http://yanran.li/dailydialog.html
https://arxiv.org/pdf/1710.03957.pdf
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/556/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/556/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/555 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/555/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/555/comments | https://api.github.com/repos/huggingface/datasets/issues/555/events | https://github.com/huggingface/datasets/pull/555 | 690,197,725 | MDExOlB1bGxSZXF1ZXN0NDc3MTI2OTIy | 555 | Upgrade pip in benchmark github action | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 0 | 2020-09-01T14:37:26Z | 2020-09-01T15:26:16Z | 2020-09-01T15:26:15Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/555.diff",
"html_url": "https://github.com/huggingface/datasets/pull/555",
"merged_at": "2020-09-01T15:26:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/555.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/555... | It looks like it fixes the `import nlp` issue we have | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/555/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/555/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/554 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/554/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/554/comments | https://api.github.com/repos/huggingface/datasets/issues/554/events | https://github.com/huggingface/datasets/issues/554 | 690,173,214 | MDU6SXNzdWU2OTAxNzMyMTQ= | 554 | nlp downloads to its module path | {
"avatar_url": "https://avatars.githubusercontent.com/u/49398?v=4",
"events_url": "https://api.github.com/users/danieldk/events{/privacy}",
"followers_url": "https://api.github.com/users/danieldk/followers",
"following_url": "https://api.github.com/users/danieldk/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 8 | 2020-09-01T14:06:14Z | 2020-09-11T06:19:24Z | 2020-09-11T06:19:24Z | MEMBER | null | null | null | I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems:
```>>> import nlp
>>> squad_dataset = nlp.load_dataset('squad')
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/554/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/554/timeline | null | completed | false | [
"Indeed this is a known issue arising from the fact that we try to be compatible with cloupickle.\r\n\r\nDoes this also happen if you are installing in a virtual environment?",
"> Indeed this is a know issue with the fact that we try to be compatible with cloupickle.\r\n> \r\n> Does this also happen if you are in... |
https://api.github.com/repos/huggingface/datasets/issues/553 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/553/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/553/comments | https://api.github.com/repos/huggingface/datasets/issues/553/events | https://github.com/huggingface/datasets/pull/553 | 690,143,182 | MDExOlB1bGxSZXF1ZXN0NDc3MDgxNTg2 | 553 | [Fix GitHub Actions] test adding tmate | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 0 | 2020-09-01T13:28:03Z | 2021-05-05T18:24:38Z | 2020-09-03T09:01:13Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/553.diff",
"html_url": "https://github.com/huggingface/datasets/pull/553",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/553.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/553"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/553/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/553/timeline | null | null | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/552/comments | https://api.github.com/repos/huggingface/datasets/issues/552/events | https://github.com/huggingface/datasets/pull/552 | 690,079,429 | MDExOlB1bGxSZXF1ZXN0NDc3MDI4MzMx | 552 | Add multiprocessing | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | null | 10 | 2020-09-01T11:56:17Z | 2020-09-22T15:11:56Z | 2020-09-02T10:01:25Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/552.diff",
"html_url": "https://github.com/huggingface/datasets/pull/552",
"merged_at": "2020-09-02T10:01:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/552.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/552... | Adding multiprocessing to `.map`
It works in 3 steps:
- shard the dataset in `num_proc` shards
- spawn one process per shard and call `map` on them
- concatenate the resulting datasets
Example of usage:
```python
from nlp import load_dataset
dataset = load_dataset("squad", split="train")
def function... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/552/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/552/timeline | null | null | true | [
"Logging looks like\r\n\r\n```\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess #0 will write at playground/tmp_00000_of_00004.arrow\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess #1 will write at playground/tmp_00001_of_00004.arrow\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess... |
https://api.github.com/repos/huggingface/datasets/issues/551 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/551/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/551/comments | https://api.github.com/repos/huggingface/datasets/issues/551/events | https://github.com/huggingface/datasets/pull/551 | 690,034,762 | MDExOlB1bGxSZXF1ZXN0NDc2OTkwNjAw | 551 | added HANS dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"events_url": "https://api.github.com/users/TevenLeScao/events{/privacy}",
"followers_url": "https://api.github.com/users/TevenLeScao/followers",
"following_url": "https://api.github.com/users/TevenLeScao/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | null | 0 | 2020-09-01T10:42:02Z | 2020-09-01T12:17:10Z | 2020-09-01T12:17:10Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/551.diff",
"html_url": "https://github.com/huggingface/datasets/pull/551",
"merged_at": "2020-09-01T12:17:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/551.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/551... | Adds the [HANS](https://github.com/tommccoy1/hans) dataset to evaluate NLI systems. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/551/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/551/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/550 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/550/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/550/comments | https://api.github.com/repos/huggingface/datasets/issues/550/events | https://github.com/huggingface/datasets/pull/550 | 689,775,914 | MDExOlB1bGxSZXF1ZXN0NDc2NzgyNDY1 | 550 | [BUGFIX] Solving mismatched checksum issue for the LinCE dataset (#539) | {
"avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4",
"events_url": "https://api.github.com/users/gaguilar/events{/privacy}",
"followers_url": "https://api.github.com/users/gaguilar/followers",
"following_url": "https://api.github.com/users/gaguilar/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 2 | 2020-09-01T03:27:03Z | 2020-09-03T09:06:01Z | 2020-09-03T09:06:01Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/550.diff",
"html_url": "https://github.com/huggingface/datasets/pull/550",
"merged_at": "2020-09-03T09:06:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/550.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/550... | Hi,
I have added the updated `dataset_infos.json` file for the LinCE benchmark. This update is to fix the mismatched checksum bug #539 for one of the datasets in the LinCE benchmark. To update the file, I run this command from the nlp root directory:
```
python nlp-cli test ./datasets/lince --save_infos --all_co... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/550/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/550/timeline | null | null | true | [
"Thanks a lot for that!\r\nThe line you are mentioning is a bug indeed, do you mind fixing it at the same time?",
"No worries! \r\n\r\nI pushed right away the fix, but then I realized that the master branch already had it, so I ended up merging the master branch with lince locally and then overwriting the previou... |
https://api.github.com/repos/huggingface/datasets/issues/549 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/549/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/549/comments | https://api.github.com/repos/huggingface/datasets/issues/549/events | https://github.com/huggingface/datasets/pull/549 | 689,766,465 | MDExOlB1bGxSZXF1ZXN0NDc2Nzc0OTI1 | 549 | Fix bleurt logging import | {
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"events_url": "https://api.github.com/users/jbragg/events{/privacy}",
"followers_url": "https://api.github.com/users/jbragg/followers",
"following_url": "https://api.github.com/users/jbragg/following{/other_user}",
"gists_url": "https://ap... | [] | closed | false | null | [] | null | 2 | 2020-09-01T03:01:25Z | 2020-09-03T18:04:46Z | 2020-09-03T09:04:20Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/549.diff",
"html_url": "https://github.com/huggingface/datasets/pull/549",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/549.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/549"
} | Bleurt started throwing an error in some code we have.
This looks like the fix but...
It's also unnerving that even a prebuilt docker image with pinned versions can be working 1 day and then fail the next (especially for production systems).
Any way for us to pin your metrics code so that they are guaranteed not... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/549/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/549/timeline | null | null | true | [
"That’s a good point that we started to discuss internally as well. We should pin the dataset en metrics code by default indeed.\r\nLet’s update this in the coming release.",
"Ok closed this with #567 and we are working on a more general solution to pin dataset version in #562 (should be in the coming release)."
... |
https://api.github.com/repos/huggingface/datasets/issues/548 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/548/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/548/comments | https://api.github.com/repos/huggingface/datasets/issues/548/events | https://github.com/huggingface/datasets/pull/548 | 689,285,996 | MDExOlB1bGxSZXF1ZXN0NDc2MzYzMjU1 | 548 | [Breaking] Switch text loading to multi-threaded PyArrow loading | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 5 | 2020-08-31T15:15:41Z | 2020-09-08T10:19:58Z | 2020-09-08T10:19:57Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/548.diff",
"html_url": "https://github.com/huggingface/datasets/pull/548",
"merged_at": "2020-09-08T10:19:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/548.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/548... | Test if we can get better performances for large-scale text datasets by using multi-threaded text file loading based on Apache Arrow multi-threaded CSV loader.
If it works ok, it would fix #546.
**Breaking change**:
The text lines now do not include final line-breaks anymore. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/548/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/548/timeline | null | null | true | [
"Awesome !\r\nAlso I was wondering if we should try to make the hashing of the `data_files` faster (it is used to build the cache directory of datasets like `text` or `json`). Right now it reads each file and hashes all of its data. We could simply hash the path and some metadata including the `time last modified` ... |
https://api.github.com/repos/huggingface/datasets/issues/547 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/547/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/547/comments | https://api.github.com/repos/huggingface/datasets/issues/547/events | https://github.com/huggingface/datasets/pull/547 | 689,268,589 | MDExOlB1bGxSZXF1ZXN0NDc2MzQ4OTk5 | 547 | [Distributed] Making loading distributed datasets a bit safer | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 0 | 2020-08-31T14:51:34Z | 2020-08-31T15:16:30Z | 2020-08-31T15:16:29Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/547.diff",
"html_url": "https://github.com/huggingface/datasets/pull/547",
"merged_at": "2020-08-31T15:16:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/547.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/547... | Add some file-locks during dataset loading | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/547/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/547/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/546 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/546/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/546/comments | https://api.github.com/repos/huggingface/datasets/issues/546/events | https://github.com/huggingface/datasets/issues/546 | 689,186,526 | MDU6SXNzdWU2ODkxODY1MjY= | 546 | Very slow data loading on large dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4",
"events_url": "https://api.github.com/users/agemagician/events{/privacy}",
"followers_url": "https://api.github.com/users/agemagician/followers",
"following_url": "https://api.github.com/users/agemagician/following{/other_user}",
"gists_ur... | [] | closed | false | null | [] | null | 28 | 2020-08-31T12:57:23Z | 2024-01-02T20:26:24Z | 2020-09-08T10:19:57Z | NONE | null | null | null | I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data.
It has been 8 hours and still, it is on the loading steps.
It does work when the text dataset size is small about 1 GB, but it doesn't scale.
It also uses a single thread during the data loading step.
```
train_fil... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/546/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/546/timeline | null | completed | false | [
"When you load a text file for the first time with `nlp`, the file is converted into Apache Arrow format. Arrow allows to use memory-mapping, which means that you can load an arbitrary large dataset.\r\n\r\nNote that as soon as the conversion has been done once, the next time you'll load the dataset it will be much... |
https://api.github.com/repos/huggingface/datasets/issues/545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/545/comments | https://api.github.com/repos/huggingface/datasets/issues/545/events | https://github.com/huggingface/datasets/issues/545 | 689,138,878 | MDU6SXNzdWU2ODkxMzg4Nzg= | 545 | New release coming up for this library | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 1 | 2020-08-31T11:37:38Z | 2021-01-13T10:59:04Z | 2021-01-13T10:59:04Z | MEMBER | null | null | null | Hi all,
A few words on the roadmap for this library.
The next release will be a big one and is planed at the end of this week.
In addition to the support for indexed datasets (useful for non-parametric models like REALM, RAG, DPR, knn-LM and many other fast dataset retrieval technics), it will:
- have support f... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 4,
"laugh": 0,
"rocket": 0,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/545/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/545/timeline | null | completed | false | [
"Update: release is planed mid-next week."
] |
https://api.github.com/repos/huggingface/datasets/issues/544 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/544/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/544/comments | https://api.github.com/repos/huggingface/datasets/issues/544/events | https://github.com/huggingface/datasets/pull/544 | 689,062,519 | MDExOlB1bGxSZXF1ZXN0NDc2MTc4MDM2 | 544 | [Distributed] Fix load_dataset error when multiprocessing + add test | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 0 | 2020-08-31T09:30:10Z | 2020-08-31T11:15:11Z | 2020-08-31T11:15:10Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/544.diff",
"html_url": "https://github.com/huggingface/datasets/pull/544",
"merged_at": "2020-08-31T11:15:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/544.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/544... | Fix #543 + add test | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/544/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/544/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/543 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/543/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/543/comments | https://api.github.com/repos/huggingface/datasets/issues/543/events | https://github.com/huggingface/datasets/issues/543 | 688,644,407 | MDU6SXNzdWU2ODg2NDQ0MDc= | 543 | nlp.load_dataset is not safe for multi processes when loading from local files | {
"avatar_url": "https://avatars.githubusercontent.com/u/55288513?v=4",
"events_url": "https://api.github.com/users/luyug/events{/privacy}",
"followers_url": "https://api.github.com/users/luyug/followers",
"following_url": "https://api.github.com/users/luyug/following{/other_user}",
"gists_url": "https://api.... | [] | closed | false | null | [] | null | 1 | 2020-08-30T03:20:34Z | 2020-08-31T11:15:10Z | 2020-08-31T11:15:10Z | NONE | null | null | null | Loading from local files, e.g., `dataset = nlp.load_dataset('csv', data_files=['file_1.csv', 'file_2.csv'])`
concurrently from multiple processes will raise `FileExistsError` from the builder's line 430, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/builder.py#L423-L438
Likel... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/543/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/543/timeline | null | completed | false | [
"I'll take a look!"
] |
https://api.github.com/repos/huggingface/datasets/issues/542 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/542/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/542/comments | https://api.github.com/repos/huggingface/datasets/issues/542/events | https://github.com/huggingface/datasets/pull/542 | 688,555,036 | MDExOlB1bGxSZXF1ZXN0NDc1NzkyNTY0 | 542 | Add TensorFlow example | {
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"events_url": "https://api.github.com/users/jplu/events{/privacy}",
"followers_url": "https://api.github.com/users/jplu/followers",
"following_url": "https://api.github.com/users/jplu/following{/other_user}",
"gists_url": "https://api.githu... | [] | closed | false | null | [] | null | 0 | 2020-08-29T15:39:27Z | 2020-08-31T09:49:20Z | 2020-08-31T09:49:19Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/542.diff",
"html_url": "https://github.com/huggingface/datasets/pull/542",
"merged_at": "2020-08-31T09:49:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/542.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/542... | Update the Quick Tour documentation in order to add the TensorFlow equivalent source code for the classification example. Now it is possible to select either the code in PyTorch or in TensorFlow in the Quick tour. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/542/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/542/timeline | null | null | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/541/comments | https://api.github.com/repos/huggingface/datasets/issues/541/events | https://github.com/huggingface/datasets/issues/541 | 688,521,224 | MDU6SXNzdWU2ODg1MjEyMjQ= | 541 | Best practices for training tokenizers with nlp | {
"avatar_url": "https://avatars.githubusercontent.com/u/11806234?v=4",
"events_url": "https://api.github.com/users/moskomule/events{/privacy}",
"followers_url": "https://api.github.com/users/moskomule/followers",
"following_url": "https://api.github.com/users/moskomule/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | null | 1 | 2020-08-29T12:06:49Z | 2022-10-04T17:28:04Z | 2022-10-04T17:28:04Z | NONE | null | null | null | Hi, thank you for developing this library.
What do you think are the best practices for training tokenizers using `nlp`? In the documentation and examples, I could only find pre-trained tokenizers being used. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/541/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/541/timeline | null | completed | false | [
"Docs that explain how to train a tokenizer with `datasets` are available here: https://huggingface.co/docs/tokenizers/training_from_memory#using-the-datasets-library"
] |
https://api.github.com/repos/huggingface/datasets/issues/540 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/540/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/540/comments | https://api.github.com/repos/huggingface/datasets/issues/540/events | https://github.com/huggingface/datasets/pull/540 | 688,475,884 | MDExOlB1bGxSZXF1ZXN0NDc1NzMzNzMz | 540 | [BUGFIX] Fix Race Dataset Checksum bug | {
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"events_url": "https://api.github.com/users/abarbosa94/events{/privacy}",
"followers_url": "https://api.github.com/users/abarbosa94/followers",
"following_url": "https://api.github.com/users/abarbosa94/following{/other_user}",
"gists_url":... | [] | closed | false | null | [] | null | 4 | 2020-08-29T07:00:10Z | 2020-09-18T11:42:20Z | 2020-09-18T11:42:20Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/540.diff",
"html_url": "https://github.com/huggingface/datasets/pull/540",
"merged_at": "2020-09-18T11:42:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/540.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/540... | In #537 I noticed that there was a bug in checksum checking when I have tried to download the race dataset. The reason for this is that the current preprocessing was just considering the `high school` data and it was ignoring the `middle` one. This PR just fixes it :)
Moreover, I have added some descriptions. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/540/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/540/timeline | null | null | true | [
"I'm not sure this would fix #537 .\r\nHowever your point about the missing `middle` data is right and we probably want to include these data as well.\r\nDo you think it would we worth having different configurations for this dataset for users who want to only load part of it (`high school` or `middle` or `all`) ?"... |
https://api.github.com/repos/huggingface/datasets/issues/539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/539/comments | https://api.github.com/repos/huggingface/datasets/issues/539/events | https://github.com/huggingface/datasets/issues/539 | 688,323,602 | MDU6SXNzdWU2ODgzMjM2MDI= | 539 | [Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data | {
"avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4",
"events_url": "https://api.github.com/users/gaguilar/events{/privacy}",
"followers_url": "https://api.github.com/users/gaguilar/followers",
"following_url": "https://api.github.com/users/gaguilar/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 3 | 2020-08-28T19:55:51Z | 2020-09-03T16:34:02Z | 2020-09-03T16:34:01Z | CONTRIBUTOR | null | null | null | Hi,
There is a `NonMatchingChecksumError` for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark, due to a minor update to that dataset.
How can I update the checksum of the library to solve this issue? The error is below and it also appea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/539/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/539/timeline | null | completed | false | [
"Hi @gaguilar \r\n\r\nIf you want to take care of this, it very simple, you just need to regenerate the `dataset_infos.json` file as indicated [in the doc](https://huggingface.co/nlp/share_dataset.html#adding-metadata) by [installing from source](https://huggingface.co/nlp/installation.html#installing-from-source) ... |
https://api.github.com/repos/huggingface/datasets/issues/538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/538/comments | https://api.github.com/repos/huggingface/datasets/issues/538/events | https://github.com/huggingface/datasets/pull/538 | 688,015,912 | MDExOlB1bGxSZXF1ZXN0NDc1MzU3MjY2 | 538 | [logging] Add centralized logging - Bump-up cache loads to warnings | {
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"events_url": "https://api.github.com/users/thomwolf/events{/privacy}",
"followers_url": "https://api.github.com/users/thomwolf/followers",
"following_url": "https://api.github.com/users/thomwolf/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | null | 0 | 2020-08-28T11:42:29Z | 2020-08-31T11:42:51Z | 2020-08-31T11:42:51Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/538.diff",
"html_url": "https://github.com/huggingface/datasets/pull/538",
"merged_at": "2020-08-31T11:42:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/538.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/538... | Add a `nlp.logging` module to set the global logging level easily. The verbosity level also controls the tqdm bars (disabled when set higher than INFO).
You can use:
```
nlp.logging.set_verbosity(verbosity: int)
nlp.logging.set_verbosity_info()
nlp.logging.set_verbosity_warning()
nlp.logging.set_verbosity_debug... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/538/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/538/timeline | null | null | true | [] |