Dataset schema (one row per GitHub issue; column name, type, and value statistics):

column                    | type         | stats
--------------------------|--------------|------------------------
url                       | string       | lengths 58-61
repository_url            | string       | 1 distinct value
labels_url                | string       | lengths 72-75
comments_url              | string       | lengths 67-70
events_url                | string       | lengths 65-68
html_url                  | string       | lengths 48-51
id                        | int64        | 600M-2.19B
node_id                   | string       | lengths 18-24
number                    | int64        | 2-6.73k
title                     | string       | lengths 1-290
user                      | dict         |
labels                    | list         | lengths 0-4
state                     | string       | 2 distinct values
locked                    | bool         | 1 distinct value
assignee                  | dict         |
assignees                 | list         | lengths 0-4
milestone                 | dict         |
comments                  | list         | lengths 0-30
created_at                | timestamp[s] |
updated_at                | timestamp[s] |
closed_at                 | timestamp[s] |
author_association        | string       | 3 distinct values
active_lock_reason        | null         |
draft                     | null         |
pull_request              | null         |
body                      | string       | lengths 0-228k
reactions                 | dict         |
timeline_url              | string       | lengths 67-70
performed_via_github_app  | null         |
state_reason              | string       | 3 distinct values

Rows (fields in schema order; values quoted verbatim, source truncations kept as "..."):
url: https://api.github.com/repos/huggingface/datasets/issues/629
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/629/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/629/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/629/events
html_url: https://github.com/huggingface/datasets/issues/629
id: 701517550 | node_id: MDU6SXNzdWU3MDE1MTc1NTA= | number: 629
title: straddling object straddles two block boundaries
user: { "login": "bharaniabhishek123", "id": 17970177, "node_id": "MDQ6VXNlcjE3OTcwMTc3", "avatar_url": "https://avatars.githubusercontent.com/u/17970177?v=4", "gravatar_id": "", "url": "https://api.github.com/users/bharaniabhishek123", "html_url": "https://github.com/bharaniabhishek123", "followers_url": "ht...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "sorry it's an apache arrow issue." ]
created_at: 2020-09-15T00:30:46 | updated_at: 2020-09-15T00:36:17 | closed_at: 2020-09-15T00:32:17
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: I am trying to read json data (it's an array with lots of dictionaries) and getting block boundaries issue as below : I tried calling read_json with readOptions but no luck . ``` table = json.read_json(fn) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "pyarrow/_json.pyx", li...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/629/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/629/timeline
performed_via_github_app: null | state_reason: completed
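The body above hits a pyarrow JSON-reader limit: a single JSON object larger than one read block raises the "straddling object straddles two block boundaries" error. A minimal sketch of the usual mitigation, assuming newline-delimited JSON; the file name and block size are illustrative and do not come from the thread:

```python
import pyarrow.json as paj

# Enlarge the reader's block size so no single JSON object can straddle two
# blocks. 64 MiB is illustrative; pick a value larger than your biggest record.
# "data.json" is a hypothetical file name.
opts = paj.ReadOptions(block_size=64 << 20)
table = paj.read_json("data.json", read_options=opts)
```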
url: https://api.github.com/repos/huggingface/datasets/issues/625
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/625/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/625/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/625/events
html_url: https://github.com/huggingface/datasets/issues/625
id: 701057799 | node_id: MDU6SXNzdWU3MDEwNTc3OTk= | number: 625
title: dtype of tensors should be preserved
user: { "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Indeed we convert tensors to list to be able to write in arrow format. Because of this conversion we lose the dtype information. We should add the dtype detection when we do type inference. However it would require a bit of refactoring since currently the conversion happens before the type inference..\r\n\r\nAnd t...
created_at: 2020-09-14T12:38:05 | updated_at: 2021-08-17T08:30:04 | closed_at: 2021-08-17T08:30:04
author_association: CONTRIBUTOR | active_lock_reason: null | draft: null | pull_request: null
body: After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/625/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/625/timeline
performed_via_github_app: null | state_reason: completed
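The mismatch described above (a float32 model fed float64 inputs) can be shown outside the library. A minimal sketch assuming only NumPy and PyTorch; the explicit `.float()` cast is a user-side workaround, not the dtype-preserving fix the issue requests:

```python
import numpy as np
import torch

# Values that round-trip through Python lists / Arrow come back as float64.
arr = np.array([[0.1, 0.2], [0.3, 0.4]])   # dtype: float64
x = torch.from_numpy(arr)                  # dtype: torch.float64 ("double")
x = x.float()                              # cast back to the float32 the model expects
```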
url: https://api.github.com/repos/huggingface/datasets/issues/624
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/624/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/624/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/624/events
html_url: https://github.com/huggingface/datasets/issues/624
id: 700541628 | node_id: MDU6SXNzdWU3MDA1NDE2Mjg= | number: 624
title: Add learningq dataset
user: { "login": "krrishdholakia", "id": 17561003, "node_id": "MDQ6VXNlcjE3NTYxMDAz", "avatar_url": "https://avatars.githubusercontent.com/u/17561003?v=4", "gravatar_id": "", "url": "https://api.github.com/users/krrishdholakia", "html_url": "https://github.com/krrishdholakia", "followers_url": "https://api.gi...
labels: [ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
state: open | locked: false
assignee: null | assignees: [] | milestone: null
comments: []
created_at: 2020-09-13T10:20:27 | updated_at: 2020-09-14T09:50:02 | closed_at: null
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: Hi, Thank you again for this amazing repo. Would it be possible for y'all to add the LearningQ dataset - https://github.com/AngusGLChen/LearningQ ?
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/624/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/624/timeline
performed_via_github_app: null | state_reason: null
url: https://api.github.com/repos/huggingface/datasets/issues/623
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/623/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/623/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/623/events
html_url: https://github.com/huggingface/datasets/issues/623
id: 700235308 | node_id: MDU6SXNzdWU3MDAyMzUzMDg= | number: 623
title: Custom feature types in `load_dataset` from CSV
user: { "login": "lvwerra", "id": 8264887, "node_id": "MDQ6VXNlcjgyNjQ4ODc=", "avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lvwerra", "html_url": "https://github.com/lvwerra", "followers_url": "https://api.github.com/users/lvwerra/...
labels: [ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Currently `csv` doesn't support the `features` attribute (unlike `json`).\r\nWhat you can do for now is cast the features using the in-place transform `cast_`\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label...
created_at: 2020-09-12T13:21:34 | updated_at: 2020-09-30T19:51:43 | closed_at: 2020-09-30T08:39:54
author_association: MEMBER | active_lock_reason: null | draft: null | pull_request: null
body: I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotion dataset. To get the data you can use the followi...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/623/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/623/timeline
performed_via_github_app: null | state_reason: completed
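The first comment's workaround is to load the CSV and then cast the schema afterwards with the in-place `cast_` transform of that era (later superseded by `cast`). A minimal sketch; the file name and target schema are illustrative, and only the `cast_` call itself comes from the thread:

```python
from datasets import load_dataset, Features, Value

# Illustrative target schema; the csv loader ignored `features` at the time.
features = Features({"text": Value("string"), "label": Value("int64")})
dataset = load_dataset("csv", data_files="data.csv", delimiter=";",
                       column_names=["text", "label"])   # "data.csv" is hypothetical
dataset["train"].cast_(features)   # in-place cast, as suggested in the comments
```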
url: https://api.github.com/repos/huggingface/datasets/issues/622
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/622/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/622/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/622/events
html_url: https://github.com/huggingface/datasets/issues/622
id: 700225826 | node_id: MDU6SXNzdWU3MDAyMjU4MjY= | number: 622
title: load_dataset for text files not working
user: { "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users...
labels: [ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
state: closed | locked: false
assignee: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
assignees: [ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
milestone: null
comments: [ "Can you give us more information on your os and pip environments (pip list)?", "@thomwolf Sure. I'll try downgrading to 3.7 now even though Arrow say they support >=3.5.\r\n\r\nLinux (Ubuntu 18.04) - Python 3.8\r\n======================\r\nPackage - Version\r\n---------------------\r\ncertifi 2...
created_at: 2020-09-12T12:49:28 | updated_at: 2020-10-28T11:07:31 | closed_at: 2020-10-28T11:07:30
author_association: CONTRIBUTOR | active_lock_reason: null | draft: null | pull_request: null
body: Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/622/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/622/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/620
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/620/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/620/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/620/events
html_url: https://github.com/huggingface/datasets/issues/620
id: 699815135 | node_id: MDU6SXNzdWU2OTk4MTUxMzU= | number: 620
title: map/filter multiprocessing raises errors and corrupts datasets
user: { "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.g...
labels: [ { "id": 1935892857, "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug", "name": "bug", "color": "d73a4a", "default": true, "description": "Something isn't working" } ]
state: closed | locked: false
assignee: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
assignees: [ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
milestone: null
comments: [ "It seems that I ran into the same problem\r\n```\r\ndef tokenize(cols, example):\r\n for in_col, out_col in cols.items():\r\n example[out_col] = hf_tokenizer.convert_tokens_to_ids(hf_tokenizer.tokenize(example[in_col]))\r\n return example\r\ncola = datasets.load_dataset('glue', 'cola')\r\ntokenized_cola = col...
created_at: 2020-09-11T22:30:06 | updated_at: 2020-10-08T16:31:47 | closed_at: 2020-10-08T16:31:46
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: After upgrading to the 1.0 started seeing errors in my data loading script after enabling multiprocessing. ```python ... ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed) ner_ds_dict["validation"] = ner_ds_dict["test"] rel_ds_dict = rel_ds.train_test_split(test_si...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/620/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/620/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/619
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/619/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/619/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/619/events
html_url: https://github.com/huggingface/datasets/issues/619
id: 699733612 | node_id: MDU6SXNzdWU2OTk3MzM2MTI= | number: 619
title: Mistakes in MLQA features names
user: { "login": "M-Salti", "id": 9285264, "node_id": "MDQ6VXNlcjkyODUyNjQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4", "gravatar_id": "", "url": "https://api.github.com/users/M-Salti", "html_url": "https://github.com/M-Salti", "followers_url": "https://api.github.com/users/M-Salti/...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Indeed you're right ! Thanks for reporting that\r\n\r\nCould you open a PR to fix the features names ?" ]
created_at: 2020-09-11T20:46:23 | updated_at: 2020-09-16T06:59:19 | closed_at: 2020-09-16T06:59:19
author_association: CONTRIBUTOR | active_lock_reason: null | draft: null | pull_request: null
body: I think the following features in MLQA shouldn't be named the way they are: 1. `questions` (should be `question`) 2. `ids` (should be `id`) 3. `start` (should be `answer_start`) The reasons I'm suggesting these features be renamed are: * To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA et...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/619/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/619/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/617
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/617/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/617/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/617/events
html_url: https://github.com/huggingface/datasets/issues/617
id: 699472596 | node_id: MDU6SXNzdWU2OTk0NzI1OTY= | number: 617
title: Compare different Rouge implementations
user: { "login": "ibeltagy", "id": 2287797, "node_id": "MDQ6VXNlcjIyODc3OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ibeltagy", "html_url": "https://github.com/ibeltagy", "followers_url": "https://api.github.com/users/ibelt...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Updates - the differences between the following three\r\n(1) https://github.com/bheinzerling/pyrouge (previously popular. The one I trust the most)\r\n(2) https://github.com/google-research/google-research/tree/master/rouge\r\n(3) https://github.com/pltrdy/files2rouge (used in fairseq)\r\ncan be explained by two t...
created_at: 2020-09-11T15:49:32 | updated_at: 2023-03-22T12:08:44 | closed_at: 2020-10-02T09:52:18
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: I used RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the pegasus paper but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example. Ca...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/617/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/617/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/616
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/616/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/616/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/616/events
html_url: https://github.com/huggingface/datasets/issues/616
id: 699462293 | node_id: MDU6SXNzdWU2OTk0NjIyOTM= | number: 616
title: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
user: { "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users...
labels: []
state: open | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "I have the same issue", "Same issue here when Trying to load a dataset from disk.", "I am also experiencing this issue, and don't know if it's affecting my training.", "Same here. I hope the dataset is not being modified in-place.", "I think the only way to avoid this warning would be to do a copy of the n...
created_at: 2020-09-11T15:39:16 | updated_at: 2021-07-22T21:12:21 | closed_at: null
author_association: CONTRIBUTOR | active_lock_reason: null | draft: null | pull_request: null
body: I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/616/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 4 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/616/timeline
performed_via_github_app: null | state_reason: null
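The last quoted comment notes that the only way to avoid the warning is to copy the array before handing it to PyTorch, since Arrow's memory-mapped buffers are read-only. A minimal sketch of that copy, with a synthetic read-only array standing in for data coming out of the dataset:

```python
import numpy as np
import torch

arr = np.arange(4, dtype=np.int64)
arr.setflags(write=False)          # simulate the read-only array Arrow hands back

x = torch.from_numpy(arr.copy())   # the copy is writable, so no warning is raised
```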
url: https://api.github.com/repos/huggingface/datasets/issues/615
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/615/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/615/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/615/events
html_url: https://github.com/huggingface/datasets/issues/615
id: 699410773 | node_id: MDU6SXNzdWU2OTk0MTA3NzM= | number: 615
title: Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Related: https://issues.apache.org/jira/browse/ARROW-9773\r\n\r\nIt's definitely a size thing. I took a smaller dataset with 87000 rows and did:\r\n```\r\nfor i in range(10,1000,20):\r\n    table = pa.concat_tables([dset._data]*i)\r\n    table.take([0])\r\n```\r\nand it broke at around i=300.\r\n\r\nAlso when `_in...
created_at: 2020-09-11T14:50:38 | updated_at: 2023-09-21T07:59:23 | closed_at: 2020-09-19T16:46:31
author_association: MEMBER | active_lock_reason: null | draft: null | pull_request: null
body: How to reproduce: ```python from datasets import load_dataset wiki = load_dataset("wikipedia", "20200501.en", split="train") wiki[[0]] --------------------------------------------------------------------------- ArrowInvalid Traceback (most recent call last) <ipython-input-13-38...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/615/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/615/timeline
performed_via_github_app: null | state_reason: completed
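The quoted comment narrows the bug down with a probe that repeatedly concatenates the dataset's Arrow table and calls `take`, breaking at around 300 copies of an 87,000-row dataset. A runnable version of that probe, with a toy table standing in for the real data (the toy table is far too small to actually trigger the overflow; it only demonstrates the pattern):

```python
import pyarrow as pa

table = pa.table({"text": ["some row"] * 87_000})   # stand-in for dset._data

for i in range(10, 1000, 20):
    big = pa.concat_tables([table] * i)
    big.take([0])   # on the real data this tripped the offset overflow near i=300
```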
url: https://api.github.com/repos/huggingface/datasets/issues/611
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/611/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/611/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/611/events
html_url: https://github.com/huggingface/datasets/issues/611
id: 698863988 | node_id: MDU6SXNzdWU2OTg4NjM5ODg= | number: 611
title: ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
user: { "login": "sangyx", "id": 32364921, "node_id": "MDQ6VXNlcjMyMzY0OTIx", "avatar_url": "https://avatars.githubusercontent.com/u/32364921?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sangyx", "html_url": "https://github.com/sangyx", "followers_url": "https://api.github.com/users/sangyx/fo...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Can you give us stats/information on your pandas DataFrame?", "```\r\n<class 'pandas.core.frame.DataFrame'>\r\nInt64Index: 17136104 entries, 0 to 17136103\r\nData columns (total 6 columns):\r\n #   Column     Dtype \r\n---  ------     ----- \r\n 0   item_id    int64 \r\n 1   item_titl  object \r\n...
created_at: 2020-09-11T05:29:12 | updated_at: 2022-06-01T15:11:43 | closed_at: 2022-06-01T15:11:43
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: Hi, I'm trying to load a dataset from Dataframe, but I get the error: ```bash --------------------------------------------------------------------------- ArrowCapacityError Traceback (most recent call last) <ipython-input-7-146b6b495963> in <module> ----> 1 dataset = Dataset.from_pandas(emb)...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/611/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/611/timeline
performed_via_github_app: null | state_reason: completed
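The capacity error above comes from a single Arrow list array exceeding 2^31 - 1 child elements. A hypothetical workaround sketch, not taken from the thread: convert the DataFrame in slices so no single conversion produces an oversized array, then concatenate the pieces:

```python
import pandas as pd
from datasets import Dataset, concatenate_datasets

def from_pandas_chunked(df: pd.DataFrame, chunk_rows: int = 1_000_000) -> Dataset:
    # Convert slice by slice so no single Arrow array hits the 2**31 limit;
    # chunk_rows is illustrative and should shrink for very wide rows.
    parts = [
        Dataset.from_pandas(df.iloc[i:i + chunk_rows], preserve_index=False)
        for i in range(0, len(df), chunk_rows)
    ]
    return concatenate_datasets(parts)
```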
url: https://api.github.com/repos/huggingface/datasets/issues/610
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/610/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/610/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/610/events
html_url: https://github.com/huggingface/datasets/issues/610
id: 698349388 | node_id: MDU6SXNzdWU2OTgzNDkzODg= | number: 610
title: Load text file for RoBERTa pre-training.
user: { "login": "chiyuzhang94", "id": 33407613, "node_id": "MDQ6VXNlcjMzNDA3NjEz", "avatar_url": "https://avatars.githubusercontent.com/u/33407613?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chiyuzhang94", "html_url": "https://github.com/chiyuzhang94", "followers_url": "https://api.github.c...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Could you try\r\n```python\r\nload_dataset('text', data_files='test.txt',cache_dir=\"./\", split=\"train\")\r\n```\r\n?\r\n\r\n`load_dataset` returns a dictionary by default, like {\"train\": your_dataset}", "Hi @lhoestq\r\nThanks for your suggestion.\r\n\r\nI tried \r\n```\r\ndataset = load_dataset('text', data...
created_at: 2020-09-10T18:41:38 | updated_at: 2022-11-22T13:51:24 | closed_at: 2022-11-22T13:51:23
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: I migrate my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444 I tried to train a Roberta from scratch using transformers. But I got OOM issues with loading a large text file. According to the suggestion from @thomwolf , I tried to implement `datasets` to load my text file....
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/610/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/610/timeline
performed_via_github_app: null | state_reason: completed
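The first comment's suggestion, made runnable. Without a `split` argument `load_dataset` returns a dictionary-like object ({"train": your_dataset}, as the comment says); passing `split="train"` yields the dataset directly. The file name `test.txt` is the one used in the thread:

```python
from datasets import load_dataset

dataset = load_dataset("text", data_files="test.txt", cache_dir="./", split="train")
print(dataset[0])   # one example per line of the text file, e.g. {"text": "..."}
```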
url: https://api.github.com/repos/huggingface/datasets/issues/608
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/608/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/608/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/608/events
html_url: https://github.com/huggingface/datasets/issues/608
id: 698291156 | node_id: MDU6SXNzdWU2OTgyOTExNTY= | number: 608
title: Don't use the old NYU GLUE dataset URLs
user: { "login": "jeswan", "id": 57466294, "node_id": "MDQ6VXNlcjU3NDY2Mjk0", "avatar_url": "https://avatars.githubusercontent.com/u/57466294?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jeswan", "html_url": "https://github.com/jeswan", "followers_url": "https://api.github.com/users/jeswan/fo...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Feel free to open the PR ;)\r\nThanks for updating the dataset_info.json file !" ]
created_at: 2020-09-10T17:47:02 | updated_at: 2020-09-16T06:53:18 | closed_at: 2020-09-16T06:53:18
author_association: CONTRIBUTOR | active_lock_reason: null | draft: null | pull_request: null
body: NYU is switching dataset hosting from Google to FB. Initial changes to `datasets` are in https://github.com/jeswan/nlp/commit/b7d4a071d432592ded971e30ef73330529de25ce. What tests do you suggest I run before opening a PR? See: https://github.com/jiant-dev/jiant/issues/161 and https://github.com/nyu-mll/jiant/pull/111...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/608/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/608/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/600
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/600/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/600/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/600/events
html_url: https://github.com/huggingface/datasets/issues/600
id: 697496913 | node_id: MDU6SXNzdWU2OTc0OTY5MTM= | number: 600
title: Pickling error when loading dataset
user: { "login": "kandorm", "id": 17310286, "node_id": "MDQ6VXNlcjE3MzEwMjg2", "avatar_url": "https://avatars.githubusercontent.com/u/17310286?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kandorm", "html_url": "https://github.com/kandorm", "followers_url": "https://api.github.com/users/kandor...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "When I change from python3.6 to python3.8, it works! ", "Does it work when you install `nlp` from source on python 3.6?", "No, still the pickling error.", "I wasn't able to reproduce on google colab (python 3.6.9 as well) with \r\n\r\npickle==4.0\r\ndill=0.3.2\r\ntransformers==3.1.0\r\ndatasets=1.0.1 (also t...
created_at: 2020-09-10T06:28:08 | updated_at: 2020-09-25T14:31:54 | closed_at: 2020-09-25T14:31:54
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: Hi, I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as: ``` # line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size) dataset = load_da...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/600/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/600/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/598
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/598/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/598/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/598/events
html_url: https://github.com/huggingface/datasets/issues/598
id: 697156501 | node_id: MDU6SXNzdWU2OTcxNTY1MDE= | number: 598
title: The current version of the package on github has an error when loading dataset
user: { "login": "zeyuyun1", "id": 43428393, "node_id": "MDQ6VXNlcjQzNDI4Mzkz", "avatar_url": "https://avatars.githubusercontent.com/u/43428393?v=4", "gravatar_id": "", "url": "https://api.github.com/users/zeyuyun1", "html_url": "https://github.com/zeyuyun1", "followers_url": "https://api.github.com/users/zey...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Thanks for reporting !\r\nWhich version of transformers are you using ?\r\nIt looks like it doesn't have the PreTrainedTokenizerBase class", "I was using transformer 2.9. And I switch to the latest transformer package. Everything works just fine!!\r\n\r\nThanks for helping! I should look more carefully next time...
created_at: 2020-09-09T21:03:23 | updated_at: 2020-09-10T06:25:21 | closed_at: 2020-09-09T22:57:28
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: Instead of downloading the package from pip, downloading the version from source will result in an error when loading dataset (the pip version is completely fine): To recreate the error: First, installing nlp directly from source: ``` git clone https://github.com/huggingface/nlp.git cd nlp pip install -e . ``...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/598/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/598/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/597
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/597/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/597/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/597/events
html_url: https://github.com/huggingface/datasets/issues/597
id: 697112029 | node_id: MDU6SXNzdWU2OTcxMTIwMjk= | number: 597
title: Indices incorrect with multiprocessing
user: { "login": "joeddav", "id": 9353833, "node_id": "MDQ6VXNlcjkzNTM4MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joeddav", "html_url": "https://github.com/joeddav", "followers_url": "https://api.github.com/users/joeddav/...
labels: []
state: closed | locked: false
assignee: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
assignees: [ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
milestone: null
comments: [ "I fixed a bug that could cause this issue earlier today. Could you pull the latest version and try again ?", "Still the case on master.\r\nI guess we should have an offset in the multi-procs indeed (hopefully it's enough).\r\n\r\nAlso, side note is that we should add some logging before the \"test\" to say we ar...
created_at: 2020-09-09T19:50:56 | updated_at: 2020-09-10T11:03:37 | closed_at: 2020-09-10T11:03:37
author_association: CONTRIBUTOR | active_lock_reason: null | draft: null | pull_request: null
body: When `num_proc` > 1, the indices argument passed to the map function is incorrect: ```python d = load_dataset('imdb', split='test[:1%]') def fn(x, inds): print(inds) return x d.select(range(10)).map(fn, with_indices=True, batched=True) # [0, 1] # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] d.select(range(10...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/597/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/597/timeline
performed_via_github_app: null | state_reason: completed
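A runnable sketch of the reproduction in the body. The quoted snippet is cut off before the multi-process call, so the `num_proc=2` argument here is assumed from the issue title; the reported behavior was that each worker's indices restarted at 0 instead of covering its own offset range:

```python
from datasets import load_dataset

d = load_dataset("imdb", split="test[:1%]")

def fn(x, inds):
    print(inds)   # with num_proc > 1 these restarted at 0 in every worker
    return x

d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2)
```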
url: https://api.github.com/repos/huggingface/datasets/issues/595
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/595/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/595/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/595/events
html_url: https://github.com/huggingface/datasets/issues/595
id: 696892304 | node_id: MDU6SXNzdWU2OTY4OTIzMDQ= | number: 595
title: `Dataset`/`DatasetDict` has no attribute 'save_to_disk'
user: { "login": "sudarshan85", "id": 488428, "node_id": "MDQ6VXNlcjQ4ODQyOA==", "avatar_url": "https://avatars.githubusercontent.com/u/488428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sudarshan85", "html_url": "https://github.com/sudarshan85", "followers_url": "https://api.github.com/user...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "`pip install git+https://github.com/huggingface/nlp.git` should have done the job.\r\n\r\nDid you uninstall `nlp` before installing from github ?", "> Did you uninstall `nlp` before installing from github ?\r\n\r\nI did not. I created a new environment and installed `nlp` directly from `github` and it worked!\r\...
created_at: 2020-09-09T15:01:52 | updated_at: 2020-09-09T16:20:19 | closed_at: 2020-09-09T16:20:18
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: Hi, As the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) in the repo here has the method, the file `arrow_dataset.p...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/595/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/595/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/590
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/590/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/590/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/590/events
html_url: https://github.com/huggingface/datasets/issues/590
id: 696501827 | node_id: MDU6SXNzdWU2OTY1MDE4Mjc= | number: 590
title: The process cannot access the file because it is being used by another process (windows)
user: { "login": "saareliad", "id": 22762845, "node_id": "MDQ6VXNlcjIyNzYyODQ1", "avatar_url": "https://avatars.githubusercontent.com/u/22762845?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saareliad", "html_url": "https://github.com/saareliad", "followers_url": "https://api.github.com/users/...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Hi, which version of `nlp` are you using?\r\n\r\nBy the way we'll be releasing today a significant update fixing many issues (but also comprising a few breaking changes).\r\nYou can see more informations here #545 and try it by installing from source from the master branch.", "I'm using version 0.4.0.\r\n\r\n", ...
created_at: 2020-09-09T07:01:36 | updated_at: 2020-09-25T14:02:28 | closed_at: 2020-09-25T14:02:28
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: Hi, I consistently get the following error when developing in my PC (windows 10): ``` train_dataset = train_dataset.map(convert_to_features, batched=True) File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map shutil.move(tmp_file....
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/590/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/590/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/589
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/589/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/589/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/589/events
html_url: https://github.com/huggingface/datasets/issues/589
id: 696488447 | node_id: MDU6SXNzdWU2OTY0ODg0NDc= | number: 589
title: Cannot use nlp.load_dataset text, AttributeError: module 'nlp.utils' has no attribute 'logging'
user: { "login": "ksjae", "id": 17930170, "node_id": "MDQ6VXNlcjE3OTMwMTcw", "avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ksjae", "html_url": "https://github.com/ksjae", "followers_url": "https://api.github.com/users/ksjae/follow...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: []
created_at: 2020-09-09T06:46:53 | updated_at: 2020-09-09T08:57:54 | closed_at: 2020-09-09T08:57:54
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 533, in load_dataset builder_cls = import_main_class(module_path, dataset=True) File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/589/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/589/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/583
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/583/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/583/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/583/events
html_url: https://github.com/huggingface/datasets/issues/583
id: 695166265 | node_id: MDU6SXNzdWU2OTUxNjYyNjU= | number: 583
title: ArrowIndexError on Dataset.select
user: { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: []
created_at: 2020-09-07T14:36:29 | updated_at: 2020-09-08T07:43:15 | closed_at: 2020-09-08T07:43:15
author_association: MEMBER | active_lock_reason: null | draft: null | pull_request: null
body: If the indices table consists in several chunks, then `dataset.select` results in an `ArrowIndexError` error for pyarrow < 1.0.0 Example: ```python from nlp import load_dataset mnli = load_dataset("glue", "mnli", split="train") shuffled = mnli.shuffle(seed=42) mnli.select(list(range(len(mnli)))) ``` rai...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/583/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/583/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/582
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/582/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/582/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/582/events
html_url: https://github.com/huggingface/datasets/issues/582
id: 695126456 | node_id: MDU6SXNzdWU2OTUxMjY0NTY= | number: 582
title: Allow for PathLike objects
user: { "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: []
created_at: 2020-09-07T13:54:51 | updated_at: 2020-09-08T07:45:17 | closed_at: 2020-09-08T07:45:17
author_association: CONTRIBUTOR | active_lock_reason: null | draft: null | pull_request: null
body: Using PathLike objects as input for `load_dataset` does not seem to work. The following will throw an error. ```python files = list(Path(r"D:\corpora\yourcorpus").glob("*.txt")) dataset = load_dataset("text", data_files=files) ``` Traceback: ``` Traceback (most recent call last): File "C:/dev/python/dut...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/582/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/582/timeline
performed_via_github_app: null | state_reason: completed
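Until PathLike support landed, the obvious user-side workaround was to stringify the paths before passing them in. A minimal sketch reusing the snippet from the body; the directory name is the one quoted there:

```python
from pathlib import Path
from datasets import load_dataset

# Cast each Path to str, since `load_dataset` expected plain string paths.
files = [str(p) for p in Path(r"D:\corpora\yourcorpus").glob("*.txt")]
dataset = load_dataset("text", data_files=files)
```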
url: https://api.github.com/repos/huggingface/datasets/issues/581
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/581/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/581/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/581/events
html_url: https://github.com/huggingface/datasets/issues/581
id: 695120517 | node_id: MDU6SXNzdWU2OTUxMjA1MTc= | number: 581
title: Better error message when input file does not exist
user: { "login": "BramVanroy", "id": 2779410, "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "gravatar_id": "", "url": "https://api.github.com/users/BramVanroy", "html_url": "https://github.com/BramVanroy", "followers_url": "https://api.github.com/users...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: []
created_at: 2020-09-07T13:47:59 | updated_at: 2020-09-09T09:00:07 | closed_at: 2020-09-09T09:00:07
author_association: CONTRIBUTOR | active_lock_reason: null | draft: null | pull_request: null
body: In the following scenario, when `data_files` is an empty list, the stack trace and error message could be improved. This can probably be solved by checking for each file whether it actually exists and/or whether the argument is not false-y. ```python dataset = load_dataset("text", data_files=[]) ``` Example err...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/581/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/581/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/580
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/580/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/580/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/580/events
html_url: https://github.com/huggingface/datasets/issues/580
id: 694954551 | node_id: MDU6SXNzdWU2OTQ5NTQ1NTE= | number: 580
title: nlp re-creates already-there caches when using a script, but not within a shell
user: { "login": "TevenLeScao", "id": 26709476, "node_id": "MDQ6VXNlcjI2NzA5NDc2", "avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4", "gravatar_id": "", "url": "https://api.github.com/users/TevenLeScao", "html_url": "https://github.com/TevenLeScao", "followers_url": "https://api.github.com/...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Couln't reproduce on my side :/ \r\nlet me know if you manage to reproduce on another env (colab for example)", "Fixed with a clean re-install!" ]
created_at: 2020-09-07T10:23:50 | updated_at: 2020-09-07T15:19:09 | closed_at: 2020-09-07T14:26:41
author_association: CONTRIBUTOR | active_lock_reason: null | draft: null | pull_request: null
body: `nlp` keeps creating new caches for the same file when launching `filter` from a script, and behaves correctly from within the shell. Example: try running ``` import nlp hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0) hans_hard_data = nlp.load_dataset('hans', s...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/580/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/580/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/577
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/577/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/577/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/577/events
html_url: https://github.com/huggingface/datasets/issues/577
id: 694607148 | node_id: MDU6SXNzdWU2OTQ2MDcxNDg= | number: 577
title: Some languages in wikipedia dataset are not loading
user: { "login": "gaguilar", "id": 5833357, "node_id": "MDQ6VXNlcjU4MzMzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gaguilar", "html_url": "https://github.com/gaguilar", "followers_url": "https://api.github.com/users/gagui...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Some wikipedia languages have already been processed by us and are hosted on our google storage. This is the case for \"fr\" and \"en\" for example.\r\n\r\nFor other smaller languages (in terms of bytes), they are directly downloaded and parsed from the wikipedia dump site.\r\nParsing can take some time for langua...
created_at: 2020-09-07T01:16:29 | updated_at: 2023-04-11T22:50:48 | closed_at: 2022-10-11T11:16:04
author_association: CONTRIBUTOR | active_lock_reason: null | draft: null | pull_request: null
body: Hi, I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them: ``` import nlp langs = ['ar'. 'af', '...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/577/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/577/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/575
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/575/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/575/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/575/events
html_url: https://github.com/huggingface/datasets/issues/575
id: 693691611 | node_id: MDU6SXNzdWU2OTM2OTE2MTE= | number: 575
title: Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading.
user: { "login": "sudarshan85", "id": 488428, "node_id": "MDQ6VXNlcjQ4ODQyOA==", "avatar_url": "https://avatars.githubusercontent.com/u/488428?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sudarshan85", "html_url": "https://github.com/sudarshan85", "followers_url": "https://api.github.com/user...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Update:\r\n\r\nThe imdb download completed after a long time (about 45 mins). Ofcourse once download loading was instantaneous. Also, the loaded object was of type `arrow_dataset`. \r\n\r\nThe urls for glue still doesn't work though.", "Thanks for the report, I'll give a look!", "I am also seeing a similar err...
created_at: 2020-09-04T21:46:25 | updated_at: 2020-09-22T10:41:36 | closed_at: 2020-09-22T10:41:36
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: Hi, I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset: ``` >>> from nlp import load_dataset >>> dataset = load_dataset('glue', 'mrpc', split='train') ``` However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the la...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/575/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/575/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/568
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/568/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/568/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/568/events
html_url: https://github.com/huggingface/datasets/issues/568
id: 691638656 | node_id: MDU6SXNzdWU2OTE2Mzg2NTY= | number: 568
title: `metric.compute` throws `ArrowInvalid` error
user: { "login": "ibeltagy", "id": 2287797, "node_id": "MDQ6VXNlcjIyODc3OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/2287797?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ibeltagy", "html_url": "https://github.com/ibeltagy", "followers_url": "https://api.github.com/users/ibelt...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Hmm might be related to what we are solving in #564", "Could you try to update to `datasets>=1.0.0` (we changed the name of the library) and try again ?\r\nIf is was related to the distributed setup settings it must be fixed.\r\nIf it was related to empty metric inputs it's going to be fixed in #654 ", "Closin...
created_at: 2020-09-03T04:56:57 | updated_at: 2020-10-05T16:33:53 | closed_at: 2020-10-05T16:33:53
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly I can't easily reproduce it. This is using `nlp==0.4.0` ``` File "/home/beltagy/trainer.py", line 92, in validation_step rouge_scores = rouge.compute(predictions=generated_str, references=gold_st...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/568/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/568/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/565
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/565/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/565/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/565/events
html_url: https://github.com/huggingface/datasets/issues/565
id: 691039121 | node_id: MDU6SXNzdWU2OTEwMzkxMjE= | number: 565
title: No module named 'nlp.logging'
user: { "login": "melody-ju", "id": 66633754, "node_id": "MDQ6VXNlcjY2NjMzNzU0", "avatar_url": "https://avatars.githubusercontent.com/u/66633754?v=4", "gravatar_id": "", "url": "https://api.github.com/users/melody-ju", "html_url": "https://github.com/melody-ju", "followers_url": "https://api.github.com/users/...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Thanks for reporting.\r\n\r\nApparently this is a versioning issue: the lib downloaded the `bleurt` script from the master branch where we did this change recently. We'll fix that in a new release this week or early next week. Cc @thomwolf \r\n\r\nUntil that, I'd suggest you to download the right bleurt folder fro...
created_at: 2020-09-02T13:49:50 | updated_at: 2020-09-03T07:29:50 | closed_at: 2020-09-03T07:29:50
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: Hi, I am using nlp version 0.4.0. Trying to use bleurt as an eval metric, however, the bleurt script imports nlp.logging which creates the following error. What am I missing? ``` >>> import nlp 2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic l...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/565/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/565/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/560
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/560/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/560/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/560/events
html_url: https://github.com/huggingface/datasets/issues/560
id: 690488764 | node_id: MDU6SXNzdWU2OTA0ODg3NjQ= | number: 560
title: Using custom DownloadConfig results in an error
user: { "login": "ynouri", "id": 1789921, "node_id": "MDQ6VXNlcjE3ODk5MjE=", "avatar_url": "https://avatars.githubusercontent.com/u/1789921?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ynouri", "html_url": "https://github.com/ynouri", "followers_url": "https://api.github.com/users/ynouri/foll...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "From my limited understanding, part of the issue seems related to the `prepare_module` and `download_and_prepare` functions each handling the case where no config is passed. For example, `prepare_module` does mutate the object passed and forces the flags `extract_compressed_file` and `force_extract` to `True`.\r\...
created_at: 2020-09-01T22:23:02 | updated_at: 2022-10-04T17:23:45 | closed_at: 2022-10-04T17:23:45
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: ## Version / Environment Ubuntu 18.04 Python 3.6.8 nlp 0.4.0 ## Description Loading `imdb` dataset works fine when when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error. ## How to reprodu...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/560/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/560/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/554
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/554/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/554/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/554/events
html_url: https://github.com/huggingface/datasets/issues/554
id: 690173214 | node_id: MDU6SXNzdWU2OTAxNzMyMTQ= | number: 554
title: nlp downloads to its module path
user: { "login": "danieldk", "id": 49398, "node_id": "MDQ6VXNlcjQ5Mzk4", "avatar_url": "https://avatars.githubusercontent.com/u/49398?v=4", "gravatar_id": "", "url": "https://api.github.com/users/danieldk", "html_url": "https://github.com/danieldk", "followers_url": "https://api.github.com/users/danieldk/foll...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Indeed this is a known issue arising from the fact that we try to be compatible with cloupickle.\r\n\r\nDoes this also happen if you are installing in a virtual environment?", "> Indeed this is a know issue with the fact that we try to be compatible with cloupickle.\r\n> \r\n> Does this also happen if you are in...
created_at: 2020-09-01T14:06:14 | updated_at: 2020-09-11T06:19:24 | closed_at: 2020-09-11T06:19:24
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems: ```>>> import nlp >>> squad_dataset = nlp.load_dataset('squad') ...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/554/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/554/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/546
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/546/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/546/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/546/events
html_url: https://github.com/huggingface/datasets/issues/546
id: 689186526 | node_id: MDU6SXNzdWU2ODkxODY1MjY= | number: 546
title: Very slow data loading on large dataset
user: { "login": "agemagician", "id": 6087313, "node_id": "MDQ6VXNlcjYwODczMTM=", "avatar_url": "https://avatars.githubusercontent.com/u/6087313?v=4", "gravatar_id": "", "url": "https://api.github.com/users/agemagician", "html_url": "https://github.com/agemagician", "followers_url": "https://api.github.com/us...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "When you load a text file for the first time with `nlp`, the file is converted into Apache Arrow format. Arrow allows to use memory-mapping, which means that you can load an arbitrary large dataset.\r\n\r\nNote that as soon as the conversion has been done once, the next time you'll load the dataset it will be much...
created_at: 2020-08-31T12:57:23 | updated_at: 2024-01-02T20:26:24 | closed_at: 2020-09-08T10:19:57
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data. It has been 8 hours and still, it is on the loading steps. It does work when the text dataset size is small about 1 GB, but it doesn't scale. It also uses a single thread during the data loading step. ``` train_fil...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/546/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/546/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/545
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/545/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/545/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/545/events
html_url: https://github.com/huggingface/datasets/issues/545
id: 689138878 | node_id: MDU6SXNzdWU2ODkxMzg4Nzg= | number: 545
title: New release coming up for this library
user: { "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomw...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "Update: release is planed mid-next week." ]
created_at: 2020-08-31T11:37:38 | updated_at: 2021-01-13T10:59:04 | closed_at: 2021-01-13T10:59:04
author_association: MEMBER | active_lock_reason: null | draft: null | pull_request: null
body: Hi all, A few words on the roadmap for this library. The next release will be a big one and is planed at the end of this week. In addition to the support for indexed datasets (useful for non-parametric models like REALM, RAG, DPR, knn-LM and many other fast dataset retrieval technics), it will: - have support f...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/545/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 4, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/545/timeline
performed_via_github_app: null | state_reason: completed
url: https://api.github.com/repos/huggingface/datasets/issues/543
repository_url: https://api.github.com/repos/huggingface/datasets
labels_url: https://api.github.com/repos/huggingface/datasets/issues/543/labels{/name}
comments_url: https://api.github.com/repos/huggingface/datasets/issues/543/comments
events_url: https://api.github.com/repos/huggingface/datasets/issues/543/events
html_url: https://github.com/huggingface/datasets/issues/543
id: 688644407 | node_id: MDU6SXNzdWU2ODg2NDQ0MDc= | number: 543
title: nlp.load_dataset is not safe for multi processes when loading from local files
user: { "login": "luyug", "id": 55288513, "node_id": "MDQ6VXNlcjU1Mjg4NTEz", "avatar_url": "https://avatars.githubusercontent.com/u/55288513?v=4", "gravatar_id": "", "url": "https://api.github.com/users/luyug", "html_url": "https://github.com/luyug", "followers_url": "https://api.github.com/users/luyug/follow...
labels: []
state: closed | locked: false
assignee: null | assignees: [] | milestone: null
comments: [ "I'll take a look!" ]
created_at: 2020-08-30T03:20:34 | updated_at: 2020-08-31T11:15:10 | closed_at: 2020-08-31T11:15:10
author_association: NONE | active_lock_reason: null | draft: null | pull_request: null
body: Loading from local files, e.g., `dataset = nlp.load_dataset('csv', data_files=['file_1.csv', 'file_2.csv'])` concurrently from multiple processes, will raise `FileExistsError` from builder's line 430, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/builder.py#L423-L438 Likel...
reactions: { "url": "https://api.github.com/repos/huggingface/datasets/issues/543/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
timeline_url: https://api.github.com/repos/huggingface/datasets/issues/543/timeline
performed_via_github_app: null | state_reason: completed
https://api.github.com/repos/huggingface/datasets/issues/541
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/541/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/541/comments
https://api.github.com/repos/huggingface/datasets/issues/541/events
https://github.com/huggingface/datasets/issues/541
688,521,224
MDU6SXNzdWU2ODg1MjEyMjQ=
541
Best practices for training tokenizers with nlp
{ "login": "moskomule", "id": 11806234, "node_id": "MDQ6VXNlcjExODA2MjM0", "avatar_url": "https://avatars.githubusercontent.com/u/11806234?v=4", "gravatar_id": "", "url": "https://api.github.com/users/moskomule", "html_url": "https://github.com/moskomule", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[ "Docs that explain how to train a tokenizer with `datasets` are available here: https://huggingface.co/docs/tokenizers/training_from_memory#using-the-datasets-library" ]
2020-08-29T12:06:49
2022-10-04T17:28:04
2022-10-04T17:28:04
NONE
null
null
null
Hi, thank you for developing this library. What do you think are the best practices for training tokenizers using `nlp`? In the documentation and examples, I could only find pre-trained tokenizers being used.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/541/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/541/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/539
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/539/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/539/comments
https://api.github.com/repos/huggingface/datasets/issues/539/events
https://github.com/huggingface/datasets/issues/539
688,323,602
MDU6SXNzdWU2ODgzMjM2MDI=
539
[Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data
{ "login": "gaguilar", "id": 5833357, "node_id": "MDQ6VXNlcjU4MzMzNTc=", "avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gaguilar", "html_url": "https://github.com/gaguilar", "followers_url": "https://api.github.com/users/gagui...
[]
closed
false
null
[]
null
[ "Hi @gaguilar \r\n\r\nIf you want to take care of this, it very simple, you just need to regenerate the `dataset_infos.json` file as indicated [in the doc](https://huggingface.co/nlp/share_dataset.html#adding-metadata) by [installing from source](https://huggingface.co/nlp/installation.html#installing-from-source) ...
2020-08-28T19:55:51
2020-09-03T16:34:02
2020-09-03T16:34:01
CONTRIBUTOR
null
null
null
Hi, There is a `NonMatchingChecksumError` for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark, due to a minor update on that dataset. How can I update the checksum of the library to solve this issue? The error is below and it also appea...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/539/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/539/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/537
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/537/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/537/comments
https://api.github.com/repos/huggingface/datasets/issues/537/events
https://github.com/huggingface/datasets/issues/537
687,614,699
MDU6SXNzdWU2ODc2MTQ2OTk=
537
[Dataset] RACE dataset Checksums error
{ "login": "abarbosa94", "id": 6608232, "node_id": "MDQ6VXNlcjY2MDgyMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abarbosa94", "html_url": "https://github.com/abarbosa94", "followers_url": "https://api.github.com/users...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "`NonMatchingChecksumError` means that the checksum of the downloaded file is not the expected one.\r\nEither the file you downloaded was corrupted along the way, or the host updated the file.\r\nCould you try to clear your cache and run `load_dataset` again ? If the error is still there, it means that there was an...
2020-08-27T23:58:16
2020-09-18T12:07:04
2020-09-18T12:07:04
CONTRIBUTOR
null
null
null
Hi there, I would just like to use this awesome lib to fine-tune on the RACE dataset. I have performed the following steps: ``` dataset = nlp.load_dataset("race") len(dataset["train"]), len(dataset["validation"]) ``` But then I got the following error: ``` ----------------------------------...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/537/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/537/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/534
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/534/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/534/comments
https://api.github.com/repos/huggingface/datasets/issues/534/events
https://github.com/huggingface/datasets/issues/534
686,115,912
MDU6SXNzdWU2ODYxMTU5MTI=
534
`list_datasets()` is broken.
{ "login": "ashutosh-dwivedi-e3502", "id": 314169, "node_id": "MDQ6VXNlcjMxNDE2OQ==", "avatar_url": "https://avatars.githubusercontent.com/u/314169?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ashutosh-dwivedi-e3502", "html_url": "https://github.com/ashutosh-dwivedi-e3502", "followers_u...
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nThis has been fixed in #475 and the fix will be available in the next release", "What you can do instead to get the list of the datasets is call\r\n\r\n```python\r\nprint([dataset.id for dataset in nlp.list_datasets()])\r\n```", "Thanks @lhoestq . " ]
2020-08-26T08:19:01
2020-08-27T06:31:11
2020-08-27T06:31:11
NONE
null
null
null
version = '0.4.0' `list_datasets()` is broken. It results in the following error : ``` In [3]: nlp.list_datasets() Out[3]: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) ~/.virtualenvs/san-lgUCsFg_/lib/py...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/534/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/534/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/532
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/532/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/532/comments
https://api.github.com/repos/huggingface/datasets/issues/532/events
https://github.com/huggingface/datasets/issues/532
685,540,614
MDU6SXNzdWU2ODU1NDA2MTQ=
532
File exists error when used with TPU
{ "login": "go-inoue", "id": 20531705, "node_id": "MDQ6VXNlcjIwNTMxNzA1", "avatar_url": "https://avatars.githubusercontent.com/u/20531705?v=4", "gravatar_id": "", "url": "https://api.github.com/users/go-inoue", "html_url": "https://github.com/go-inoue", "followers_url": "https://api.github.com/users/go-...
[]
open
false
null
[]
null
[ "I am facing probably facing similar issues with \r\n\r\n`wiki40b_en_100_0`", "Could you try to run `dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")` once before calling the script ?\r\n\r\nIt looks like several processes try to create the dataset in arrow format at the same time. If the d...
2020-08-25T14:36:38
2020-09-01T12:14:56
null
NONE
null
null
null
Hi, I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8). I modified [line 131 in the original `run_language_modeling.py`](https://github.com/...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/532/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/532/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/525
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/525/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/525/comments
https://api.github.com/repos/huggingface/datasets/issues/525/events
https://github.com/huggingface/datasets/issues/525
683,875,483
MDU6SXNzdWU2ODM4NzU0ODM=
525
wmt download speed example
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/ss...
[]
closed
false
null
[]
null
[ "Thanks for creating the issue :)\r\nThe download link for wmt-en-de raw looks like a mirror. We should use that instead of the current url.\r\nIs this mirror official ?\r\n\r\nAlso it looks like for `ro-en` it tried to download other languages. If we manage to only download the one that is asked it'd be cool\r\n\r...
2020-08-21T23:29:06
2022-10-04T17:45:39
2022-10-04T17:45:39
CONTRIBUTOR
null
null
null
Continuing from the slack 1.0 roadmap thread with @lhoestq, I realized the slow downloads are only a thing sometimes. Here are a few examples; I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine. ``` import nlp nlp.load_dataset('wmt16', 'de-en') ``` Downloads at 49.1 K...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/525/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/525/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/524
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/524/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/524/comments
https://api.github.com/repos/huggingface/datasets/issues/524/events
https://github.com/huggingface/datasets/issues/524
683,686,359
MDU6SXNzdWU2ODM2ODYzNTk=
524
Some docs are missing parameter names
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com...
[]
closed
false
null
[]
null
[ "Indeed, good catch!" ]
2020-08-21T16:47:34
2020-08-25T09:04:03
2020-08-25T09:04:03
CONTRIBUTOR
null
null
null
See https://huggingface.co/nlp/master/package_reference/main_classes.html#nlp.Dataset.map. I believe this is because the parameter names are enclosed in backticks in the docstrings, maybe it's an old docstring format that doesn't work with the current Sphinx version.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/524/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/524/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/522
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/522/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/522/comments
https://api.github.com/repos/huggingface/datasets/issues/522/events
https://github.com/huggingface/datasets/issues/522
682,478,833
MDU6SXNzdWU2ODI0Nzg4MzM=
522
dictionnary typo in docs
{ "login": "yonigottesman", "id": 4004127, "node_id": "MDQ6VXNlcjQwMDQxMjc=", "avatar_url": "https://avatars.githubusercontent.com/u/4004127?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yonigottesman", "html_url": "https://github.com/yonigottesman", "followers_url": "https://api.github....
[]
closed
false
null
[]
null
[ "Thanks!" ]
2020-08-20T07:11:05
2020-08-20T07:52:14
2020-08-20T07:52:13
CONTRIBUTOR
null
null
null
In many places, dictionary is spelled dictionnary; not sure if it's on purpose or not. Fixed in this pr: https://github.com/huggingface/nlp/pull/521
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/522/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/522/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/519
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/519/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/519/comments
https://api.github.com/repos/huggingface/datasets/issues/519/events
https://github.com/huggingface/datasets/issues/519
682,193,882
MDU6SXNzdWU2ODIxOTM4ODI=
519
[BUG] Metrics throwing new error on master since 0.4.0
{ "login": "jbragg", "id": 2238344, "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jbragg", "html_url": "https://github.com/jbragg", "followers_url": "https://api.github.com/users/jbragg/foll...
[]
closed
false
null
[]
null
[ "Update - maybe this is only failing on bleu because I was not tokenizing inputs to the metric", "Closing - seems to be just forgetting to tokenize. And found the helpful discussion in huggingface/evaluate#105 " ]
2020-08-19T21:29:15
2022-06-02T16:41:01
2020-08-19T22:04:40
CONTRIBUTOR
null
null
null
The following error occurs when passing references of type `List[List[str]]` to metrics like bleu. It wasn't happening on 0.4.0 but is happening now on master. ``` File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute self.add_batch(predictions=predictions, references=references) ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/519/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/519/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/517
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/517/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/517/comments
https://api.github.com/repos/huggingface/datasets/issues/517/events
https://github.com/huggingface/datasets/issues/517
681,896,944
MDU6SXNzdWU2ODE4OTY5NDQ=
517
add MLDoc dataset
{ "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/use...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Any updates on this?", "This request is still an open issue waiting to be addressed by any community member, @GuillemGSubies." ]
2020-08-19T14:41:59
2021-08-03T05:59:33
null
CONTRIBUTOR
null
null
null
Hi, I am recommending that someone add MLDoc, a multilingual news topic classification dataset. - Here's a link to the GitHub repo: https://github.com/facebookresearch/MLDoc - and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf Looks like the dataset contains news stories in multiple languages...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/517/reactions", "total_count": 4, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/517/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/514
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/514/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/514/comments
https://api.github.com/repos/huggingface/datasets/issues/514/events
https://github.com/huggingface/datasets/issues/514
681,256,348
MDU6SXNzdWU2ODEyNTYzNDg=
514
dataset.shuffle(keep_in_memory=True) is never allowed
{ "login": "vegarab", "id": 24683907, "node_id": "MDQ6VXNlcjI0NjgzOTA3", "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vegarab", "html_url": "https://github.com/vegarab", "followers_url": "https://api.github.com/users/vegara...
[ { "id": 1935892877, "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue", "name": "good first issue", "color": "7057ff", "default": true, "description": "Good for newcomers" }, { "id": 4614514401, "node_...
closed
false
null
[]
null
[ "This seems to be fixed in #513 for the filter function, replacing `cache_file_name` with `indices_cache_file_name` in the assert. Although not for the `map()` function @thomwolf ", "Maybe I'm a bit tired but I fail to see the issue here.\r\n\r\nSince `cache_file_name` is `None` by default, if you set `keep_in_me...
2020-08-18T18:47:40
2022-10-10T12:21:58
2022-10-10T12:21:58
CONTRIBUTOR
null
null
null
As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)` The commit added the lines ```python # lines 994-996 in src/nlp/arrow_dataset.py assert ( not keep_in_memory or cache_file_name is None ), "Please use either...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/514/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/514/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/511
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/511/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/511/comments
https://api.github.com/repos/huggingface/datasets/issues/511/events
https://github.com/huggingface/datasets/issues/511
681,055,553
MDU6SXNzdWU2ODEwNTU1NTM=
511
dataset.shuffle() and select() resets format. Intended?
{ "login": "vegarab", "id": 24683907, "node_id": "MDQ6VXNlcjI0NjgzOTA3", "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vegarab", "html_url": "https://github.com/vegarab", "followers_url": "https://api.github.com/users/vegara...
[]
closed
false
null
[]
null
[ "Hi @vegarab yes feel free to open a discussion here.\r\n\r\nThis design choice was not very much thought about.\r\n\r\nSince `dataset.select()` (like all the method without a trailing underscore) is non-destructive and returns a new dataset it has most of its properties initialized from scratch (except the table a...
2020-08-18T13:46:01
2020-09-14T08:45:38
2020-09-14T08:45:38
CONTRIBUTOR
null
null
null
Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight? When working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")`. Later...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/511/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/511/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/510
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/510/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/510/comments
https://api.github.com/repos/huggingface/datasets/issues/510/events
https://github.com/huggingface/datasets/issues/510
680,823,644
MDU6SXNzdWU2ODA4MjM2NDQ=
510
Version of numpy to use the library
{ "login": "isspek", "id": 6966175, "node_id": "MDQ6VXNlcjY5NjYxNzU=", "avatar_url": "https://avatars.githubusercontent.com/u/6966175?v=4", "gravatar_id": "", "url": "https://api.github.com/users/isspek", "html_url": "https://github.com/isspek", "followers_url": "https://api.github.com/users/isspek/foll...
[]
closed
false
null
[]
null
[ "Seems like this method was added in 1.17. I'll add a requirement on this.", "Thank you so much. After upgrading the numpy library, it worked." ]
2020-08-18T08:59:13
2020-08-19T18:35:56
2020-08-19T18:35:56
NONE
null
null
null
Thank you so much for your excellent work! I would like to use the nlp library in my project. While importing nlp, I am receiving the following error: `AttributeError: module 'numpy.random' has no attribute 'Generator'` The numpy version in my project is 1.16.0. May I ask which numpy version is required by the nlp library? Th...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/510/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/510/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/509
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/509/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/509/comments
https://api.github.com/repos/huggingface/datasets/issues/509/events
https://github.com/huggingface/datasets/issues/509
679,711,585
MDU6SXNzdWU2Nzk3MTE1ODU=
509
Converting TensorFlow dataset example
{ "login": "saareliad", "id": 22762845, "node_id": "MDQ6VXNlcjIyNzYyODQ1", "avatar_url": "https://avatars.githubusercontent.com/u/22762845?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saareliad", "html_url": "https://github.com/saareliad", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[ "Do you want to convert a dataset script to the tfds format ?\r\nIf so, we currently have a comversion script nlp/commands/convert.py but it is a conversion script that goes from tfds to nlp.\r\nI think it shouldn't be too hard to do the changes in reverse (at some manual adjustments).\r\nIf you manage to make it w...
2020-08-16T08:05:20
2021-08-03T06:01:18
2021-08-03T06:01:17
NONE
null
null
null
Hi, I want to use TensorFlow datasets with this repo. I noticed you made a conversion script; could you give a simple example of using it? Thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/509/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/509/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/508
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/508/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/508/comments
https://api.github.com/repos/huggingface/datasets/issues/508/events
https://github.com/huggingface/datasets/issues/508
679,705,734
MDU6SXNzdWU2Nzk3MDU3MzQ=
508
TypeError: Receiver() takes no arguments
{ "login": "sebastiantomac", "id": 1225851, "node_id": "MDQ6VXNlcjEyMjU4NTE=", "avatar_url": "https://avatars.githubusercontent.com/u/1225851?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sebastiantomac", "html_url": "https://github.com/sebastiantomac", "followers_url": "https://api.gith...
[]
closed
false
null
[]
null
[ "Which version of Apache Beam do you have (can you copy your full environment info here)?", "apache-beam==2.23.0\r\nnlp==0.4.0\r\n\r\nFor me this was resolved by running the same python script on Linux (or really WSL). ", "Do you manage to run a dummy beam pipeline with python on windows ? \r\nYou can test a du...
2020-08-16T07:18:16
2020-09-01T14:53:33
2020-09-01T14:49:03
NONE
null
null
null
I am trying to load a wikipedia data set ``` import nlp from nlp import load_dataset dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner') #dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner') ``` Th...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/508/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/508/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/507
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/507/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/507/comments
https://api.github.com/repos/huggingface/datasets/issues/507/events
https://github.com/huggingface/datasets/issues/507
679,400,683
MDU6SXNzdWU2Nzk0MDA2ODM=
507
Errors when I use
{ "login": "mchari", "id": 30506151, "node_id": "MDQ6VXNlcjMwNTA2MTUx", "avatar_url": "https://avatars.githubusercontent.com/u/30506151?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mchari", "html_url": "https://github.com/mchari", "followers_url": "https://api.github.com/users/mchari/fo...
[]
closed
false
null
[]
null
[ "Looks like an issue with 3.0.2 transformers version. Works fine when I use \"master\" version of transformers." ]
2020-08-14T21:03:57
2020-08-14T21:39:10
2020-08-14T21:39:10
NONE
null
null
null
I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors. I am using **transformers 3.0.2**. Code: from transformers.pipelines import pipeline from transformers.modeling_auto import AutoModelForQuestionAnswering from transformers.tokenization_auto import AutoToke...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/507/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/507/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/501
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/501/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/501/comments
https://api.github.com/repos/huggingface/datasets/issues/501/events
https://github.com/huggingface/datasets/issues/501
677,952,893
MDU6SXNzdWU2Nzc5NTI4OTM=
501
Caching doesn't work for map (non-deterministic)
{ "login": "wulu473", "id": 8149933, "node_id": "MDQ6VXNlcjgxNDk5MzM=", "avatar_url": "https://avatars.githubusercontent.com/u/8149933?v=4", "gravatar_id": "", "url": "https://api.github.com/users/wulu473", "html_url": "https://github.com/wulu473", "followers_url": "https://api.github.com/users/wulu473/...
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Thanks for reporting !\r\n\r\nTo store the cache file, we compute a hash of the function given in `.map`, using our own hashing function.\r\nThe hash doesn't seem to stay the same over sessions for the tokenizer.\r\nApparently this is because of the regex at `tokenizer.pat` is not well supported by our hashing fun...
2020-08-12T20:20:07
2022-08-08T11:02:23
2020-08-24T16:34:35
NONE
null
null
null
The caching functionality doesn't work reliably when tokenizing a dataset. Here's a small example to reproduce it. ```python import nlp import transformers def main(): ds = nlp.load_dataset("reddit", split="train[:500]") tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2") def conv...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/501/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/501/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/492
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/492/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/492/comments
https://api.github.com/repos/huggingface/datasets/issues/492/events
https://github.com/huggingface/datasets/issues/492
676,495,064
MDU6SXNzdWU2NzY0OTUwNjQ=
492
nlp.Features does not distinguish between nullable and non-nullable types in PyArrow schema
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com...
[]
closed
false
null
[]
null
[ "In 0.4.0, the assertion in `concatenate_datasets ` is on the features, and not the schema.\r\nCould you try to update `nlp` ?\r\n\r\nAlso, since 0.4.0, you can use `dset_wikipedia.cast_(dset_books.features)` to avoid the schema cast hack.", "Or maybe the assertion comes from elsewhere ?", "I'm using the master...
2020-08-11T00:27:46
2020-08-26T16:17:19
2020-08-26T16:17:19
CONTRIBUTOR
null
null
null
Here's the code I'm trying to run: ```python dset_wikipedia = nlp.load_dataset("wikipedia", "20200501.en", split="train", cache_dir=args.cache_dir) dset_wikipedia.drop(columns=["title"]) dset_wikipedia.features.pop("title") dset_books = nlp.load_dataset("bookcorpus", split="train", cache_dir=args.cache_dir) dse...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/492/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/492/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/491/comments
https://api.github.com/repos/huggingface/datasets/issues/491/events
https://github.com/huggingface/datasets/issues/491
676,486,275
MDU6SXNzdWU2NzY0ODYyNzU=
491
No 0.4.0 release on GitHub
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com...
[]
closed
false
null
[]
null
[ "I did the release on github, and updated the doc :)\r\nSorry for the delay", "Thanks!" ]
2020-08-10T23:59:57
2020-08-11T16:50:07
2020-08-11T16:50:07
CONTRIBUTOR
null
null
null
0.4.0 was released on PyPI, but not on GitHub. This means [the documentation](https://huggingface.co/nlp/) still shows 0.3.0, and that there's no tag to easily clone the 0.4.0 version of the repo.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/491/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/491/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/490
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/490/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/490/comments
https://api.github.com/repos/huggingface/datasets/issues/490/events
https://github.com/huggingface/datasets/issues/490
676,482,242
MDU6SXNzdWU2NzY0ODIyNDI=
490
Loading preprocessed Wikipedia dataset requires apache_beam
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com...
[]
closed
false
null
[]
null
[]
2020-08-10T23:46:50
2020-08-14T13:17:20
2020-08-14T13:17:20
CONTRIBUTOR
null
null
null
Running `nlp.load_dataset("wikipedia", "20200501.en", split="train", dir="/tmp/wikipedia")` gives an error if apache_beam is not installed, stemming from https://github.com/huggingface/nlp/blob/38eb2413de54ee804b0be81781bd65ac4a748ced/src/nlp/builder.py#L981-L988 This succeeded without the dependency in ve...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/490/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/490/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/489
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/489/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/489/comments
https://api.github.com/repos/huggingface/datasets/issues/489/events
https://github.com/huggingface/datasets/issues/489
676,456,257
MDU6SXNzdWU2NzY0NTYyNTc=
489
ug
{ "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.g...
[]
closed
false
null
[]
null
[ "whoops", "please delete this" ]
2020-08-10T22:33:03
2020-08-10T22:55:14
2020-08-10T22:33:40
NONE
null
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/489/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/489/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/488
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/488/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/488/comments
https://api.github.com/repos/huggingface/datasets/issues/488/events
https://github.com/huggingface/datasets/issues/488
676,299,993
MDU6SXNzdWU2NzYyOTk5OTM=
488
issues with downloading datasets for wmt16 and wmt19
{ "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/fo...
[]
closed
false
null
[]
null
[ "I found `UNv1.0.en-ru.tar.gz` here: https://conferences.unite.un.org/uncorpus/en/downloadoverview, so it can be reconstructed with:\r\n```\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar.gz.00\r\nwget -c https://stuncorpusprod.blob.core.windows.net/corpusfiles/UNv1.0.en-ru.tar....
2020-08-10T17:32:51
2022-10-04T17:46:59
2022-10-04T17:46:58
CONTRIBUTOR
null
null
null
I have encountered multiple issues while trying to: ``` import nlp dataset = nlp.load_dataset('wmt16', 'ru-en') metric = nlp.load_metric('wmt16') ``` 1. I had to do `pip install -e ".[dev]"` on master; the currently released nlp didn't work (sorry, didn't save the error) - I went back to the released version and no...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/488/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/488/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/486
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/486/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/486/comments
https://api.github.com/repos/huggingface/datasets/issues/486/events
https://github.com/huggingface/datasets/issues/486
675,649,034
MDU6SXNzdWU2NzU2NDkwMzQ=
486
Bookcorpus data contains pretokenized text
{ "login": "orsharir", "id": 99543, "node_id": "MDQ6VXNlcjk5NTQz", "avatar_url": "https://avatars.githubusercontent.com/u/99543?v=4", "gravatar_id": "", "url": "https://api.github.com/users/orsharir", "html_url": "https://github.com/orsharir", "followers_url": "https://api.github.com/users/orsharir/foll...
[]
closed
false
null
[]
null
[ "Yes indeed it looks like some `'` and spaces are missing (for example in `dont` or `didnt`).\r\nDo you know if there exist some copies without this issue ?\r\nHow would you fix this issue on the current data exactly ? I can see that the data is raw text (not tokenized) so I'm not sure I understand how you would do...
2020-08-09T06:53:24
2022-10-04T17:44:33
2022-10-04T17:44:33
CONTRIBUTOR
null
null
null
It seems that the bookcorpus data downloaded through the library was pretokenized with NLTK's Treebank tokenizer, which changes the text in ways incompatible with how, for instance, BERT's WordPiece tokenizer works. For example, "didn't" becomes "did" + "n't", and double quotes are changed to `` and '' for start and end q...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/486/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/486/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/485
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/485/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/485/comments
https://api.github.com/repos/huggingface/datasets/issues/485/events
https://github.com/huggingface/datasets/issues/485
675,595,393
MDU6SXNzdWU2NzU1OTUzOTM=
485
PAWS dataset first item is header
{ "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[]
2020-08-08T22:05:25
2020-08-19T09:50:01
2020-08-19T09:50:01
CONTRIBUTOR
null
null
null
``` import nlp dataset = nlp.load_dataset('xtreme', 'PAWS-X.en') dataset['test'][0] ``` prints the following ``` {'label': 'label', 'sentence1': 'sentence1', 'sentence2': 'sentence2'} ``` dataset['test'][0] should probably be the first item in the dataset, not just a dictionary mapping the column names t...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/485/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/485/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/483
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/483/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/483/comments
https://api.github.com/repos/huggingface/datasets/issues/483/events
https://github.com/huggingface/datasets/issues/483
675,080,694
MDU6SXNzdWU2NzUwODA2OTQ=
483
rotten tomatoes movie review dataset taken down
{ "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "found a mirror: https://storage.googleapis.com/seldon-datasets/sentence_polarity_v1/rt-polaritydata.tar.gz", "fixed in #484 ", "Closing this one. Thanks again @jxmorris12 for taking care of this :)" ]
2020-08-07T15:12:01
2020-09-08T09:36:34
2020-09-08T09:36:33
CONTRIBUTOR
null
null
null
In an interesting twist of events, the individual who created the movie review dataset seems to have left Cornell, and their webpage has been removed, along with the dataset itself (http://www.cs.cornell.edu/people/pabo/movie-review-data/rt-polaritydata.tar.gz). It's not downloadable anymore.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/483/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/483/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/482
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/482/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/482/comments
https://api.github.com/repos/huggingface/datasets/issues/482/events
https://github.com/huggingface/datasets/issues/482
674,851,147
MDU6SXNzdWU2NzQ4NTExNDc=
482
Bugs : dataset.map() is frozen on ELI5
{ "login": "ratthachat", "id": 56621342, "node_id": "MDQ6VXNlcjU2NjIxMzQy", "avatar_url": "https://avatars.githubusercontent.com/u/56621342?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ratthachat", "html_url": "https://github.com/ratthachat", "followers_url": "https://api.github.com/use...
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "This comes from an overflow in pyarrow's array.\r\nIt is stuck inside the loop that reduces the batch size to avoid the overflow.\r\nI'll take a look", "I created a PR to fix the issue.\r\nIt was due to an overflow check that handled badly an empty list.\r\n\r\nYou can try the changes by using \r\n```\r\n!pip in...
2020-08-07T08:23:35
2023-04-06T09:39:59
2020-08-11T23:55:15
NONE
null
null
null
Hi Huggingface Team! Thank you guys once again for this amazing repo. I have tried to prepare ELI5 to train with T5, based on [this wonderful notebook of Suraj Patil](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb) However, when I run `dataset.map()` on ELI5 to prepare `input_text, ta...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/482/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/482/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/478
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/478/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/478/comments
https://api.github.com/repos/huggingface/datasets/issues/478/events
https://github.com/huggingface/datasets/issues/478
673,178,317
MDU6SXNzdWU2NzMxNzgzMTc=
478
Export TFRecord to GCP bucket
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/ast...
[]
closed
false
null
[]
null
[ "Nevermind, I restarted my python session and it worked fine...\r\n\r\n---\r\n\r\nI had an authentification error, and I authenticated from another terminal. After that, no more error but it was not working. Restarting the sessions makes it work :)" ]
2020-08-05T01:08:32
2020-08-05T01:21:37
2020-08-05T01:21:36
NONE
null
null
null
Previously, I was writing TFRecords manually to a GCP bucket with: `with tf.io.TFRecordWriter('gs://my_bucket/x.tfrecord')` Since `0.4.0` is out with the `export()` function, I tried it. But it seems TFRecords cannot be directly written to a GCP bucket. `dataset.export('local.tfrecord')` works fine, but `dataset....
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/478/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/478/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/477/comments
https://api.github.com/repos/huggingface/datasets/issues/477/events
https://github.com/huggingface/datasets/issues/477
673,142,143
MDU6SXNzdWU2NzMxNDIxNDM=
477
Overview.ipynb throws exceptions with nlp 0.4.0
{ "login": "mandy-li", "id": 23109219, "node_id": "MDQ6VXNlcjIzMTA5MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23109219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mandy-li", "html_url": "https://github.com/mandy-li", "followers_url": "https://api.github.com/users/man...
[]
closed
false
null
[]
null
[ "Thanks for reporting this issue\r\n\r\nThere was a bug where numpy arrays would get returned instead of tensorflow tensors.\r\nThis is fixed on master.\r\n\r\nI tried to re-run the colab and encountered this error instead:\r\n\r\n```\r\nAttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no at...
2020-08-04T23:18:15
2021-08-03T06:02:15
2021-08-03T06:02:15
NONE
null
null
null
With nlp 0.4.0, the TensorFlow example in Overview.ipynb throws the following exceptions: --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-5-48907f2ad433> in <module> ----> 1 features = {x: trai...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/477/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/477/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/474
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/474/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/474/comments
https://api.github.com/repos/huggingface/datasets/issues/474/events
https://github.com/huggingface/datasets/issues/474
672,407,330
MDU6SXNzdWU2NzI0MDczMzA=
474
test_load_real_dataset when config has BUILDER_CONFIGS that matter
{ "login": "marcotcr", "id": 698010, "node_id": "MDQ6VXNlcjY5ODAxMA==", "avatar_url": "https://avatars.githubusercontent.com/u/698010?v=4", "gravatar_id": "", "url": "https://api.github.com/users/marcotcr", "html_url": "https://github.com/marcotcr", "followers_url": "https://api.github.com/users/marcotc...
[]
closed
false
null
[]
null
[ "The `data_dir` parameter has been removed. Now the error is `ValueError: Config name is missing`\r\n\r\nAs mentioned in #470 I think we can have one test with the first config of BUILDER_CONFIGS, and another test that runs all of the configs in BUILDER_CONFIGS", "This was fixed in #527 \r\n\r\nClosing this one, ...
2020-08-03T23:46:36
2020-09-07T14:53:13
2020-09-07T14:53:13
NONE
null
null
null
If a dataset has custom `BUILDER_CONFIGS` with non-keyword arguments (or keyword arguments with non-default values), the config is not loaded during the test and causes an error. I think the problem is that `test_load_real_dataset` calls `load_dataset` with `data_dir=temp_data_dir` ([here](https://github.com/huggingfa...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/474/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/474/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/469
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/469/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/469/comments
https://api.github.com/repos/huggingface/datasets/issues/469/events
https://github.com/huggingface/datasets/issues/469
671,876,963
MDU6SXNzdWU2NzE4NzY5NjM=
469
invalid data type 'str' at _convert_outputs in arrow_dataset.py
{ "login": "Murgates", "id": 30617486, "node_id": "MDQ6VXNlcjMwNjE3NDg2", "avatar_url": "https://avatars.githubusercontent.com/u/30617486?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Murgates", "html_url": "https://github.com/Murgates", "followers_url": "https://api.github.com/users/Mur...
[]
closed
false
null
[]
null
[ "Hi ! Did you try to set the output format to pytorch ? (or tensorflow if you're using tensorflow)\r\nIt can be done with `dataset.set_format(\"torch\", columns=columns)` (or \"tensorflow\").\r\n\r\nNote that for pytorch, string columns can't be converted to `torch.Tensor`, so you have to specify in `columns=` the...
2020-08-03T07:48:29
2023-07-20T15:54:17
2023-07-20T15:54:17
NONE
null
null
null
I'm trying to build a multi-label text classifier model using the Transformers lib. I'm using Transformers NLP to load the dataset, and while calling the trainer.train() method it throws the following error: File "C:\***\arrow_dataset.py", line 343, in _convert_outputs v = command(v) TypeError: new(): invalid data type ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/469/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/469/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/468
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/468/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/468/comments
https://api.github.com/repos/huggingface/datasets/issues/468/events
https://github.com/huggingface/datasets/issues/468
671,622,441
MDU6SXNzdWU2NzE2MjI0NDE=
468
UnicodeDecodeError while loading PAN-X task of XTREME dataset
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/fo...
[]
closed
false
null
[]
null
[ "Indeed. Solution 1 is the simplest.\r\n\r\nThis is actually a recurring problem.\r\nI think we should scan all the datasets with regexpr to fix the use of `open()` without encodings.\r\nAnd probably add a test in the CI to forbid using this in the future.", "I'm happy to tackle the broader problem - will open a ...
2020-08-02T14:05:10
2020-08-20T08:16:08
2020-08-20T08:16:08
MEMBER
null
null
null
Hi 🤗 team! ## Description of the problem I'm running into a `UnicodeDecodeError` while trying to load the PAN-X subset the XTREME dataset: ``` --------------------------------------------------------------------------- UnicodeDecodeError Traceback (most recent call last) <ipython-inp...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/468/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/468/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/445
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/445/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/445/comments
https://api.github.com/repos/huggingface/datasets/issues/445/events
https://github.com/huggingface/datasets/issues/445
666,836,658
MDU6SXNzdWU2NjY4MzY2NTg=
445
DEFAULT_TOKENIZER import error in sacrebleu
{ "login": "idoh", "id": 5303103, "node_id": "MDQ6VXNlcjUzMDMxMDM=", "avatar_url": "https://avatars.githubusercontent.com/u/5303103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/idoh", "html_url": "https://github.com/idoh", "followers_url": "https://api.github.com/users/idoh/followers", ...
[]
closed
false
null
[]
null
[ "This issue was resolved by #447 " ]
2020-07-28T07:31:30
2020-07-28T12:58:56
2020-07-28T12:58:56
CONTRIBUTOR
null
null
null
Latest version: 0.3.0. When loading the metric "sacrebleu", there is an import error due to a wrong path. ![image](https://user-images.githubusercontent.com/5303103/88633063-2c5e5f00-d0bd-11ea-8ca8-4704dc975433.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/445/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/445/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/444
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/444/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/444/comments
https://api.github.com/repos/huggingface/datasets/issues/444/events
https://github.com/huggingface/datasets/issues/444
666,280,842
MDU6SXNzdWU2NjYyODA4NDI=
444
Keep loading old file even I specify a new file in load_dataset
{ "login": "joshhu", "id": 10594453, "node_id": "MDQ6VXNlcjEwNTk0NDUz", "avatar_url": "https://avatars.githubusercontent.com/u/10594453?v=4", "gravatar_id": "", "url": "https://api.github.com/users/joshhu", "html_url": "https://github.com/joshhu", "followers_url": "https://api.github.com/users/joshhu/fo...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Same here !", "This is the only fix I could come up with without touching the repo's code.\r\n```python\r\nfrom nlp.builder import FORCE_REDOWNLOAD\r\ndataset = load_dataset('csv', data_file='./a.csv', download_mode=FORCE_REDOWNLOAD, version='0.0.1')\r\n```\r\nYou'll have to change the version each time you want...
2020-07-27T13:08:06
2020-07-29T13:57:22
2020-07-29T13:57:22
NONE
null
null
null
I loaded a file called 'a.csv' with ``` dataset = load_dataset('csv', data_file='./a.csv') ``` And after a while, I tried to load another csv called 'b.csv' ``` dataset = load_dataset('csv', data_file='./b.csv') ``` However, the new dataset still seems to hold the old 'a.csv' instead of loading the new csv file. Even...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/444/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/444/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/443
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/443/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/443/comments
https://api.github.com/repos/huggingface/datasets/issues/443/events
https://github.com/huggingface/datasets/issues/443
666,246,716
MDU6SXNzdWU2NjYyNDY3MTY=
443
Cannot unpickle saved .pt dataset with torch.save()/load()
{ "login": "vegarab", "id": 24683907, "node_id": "MDQ6VXNlcjI0NjgzOTA3", "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vegarab", "html_url": "https://github.com/vegarab", "followers_url": "https://api.github.com/users/vegara...
[]
closed
false
null
[]
null
[ "This seems to be fixed in a non-released version. \r\n\r\nInstalling nlp from source\r\n```\r\ngit clone https://github.com/huggingface/nlp\r\ncd nlp\r\npip install .\r\n```\r\nsolves the issue. " ]
2020-07-27T12:13:37
2020-07-27T13:05:11
2020-07-27T13:05:11
CONTRIBUTOR
null
null
null
Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling: ```python >>> import torch >>> import nlp >>> squad = nlp.load_dataset("squad.py", split="train") >>> squad Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype...
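For reference, a minimal round-trip that triggers the failure on `nlp==0.3.0`; per the comment above, installing from master made the `torch.load` step succeed:

```python
import torch
import nlp

dataset = nlp.load_dataset("squad", split="train")
torch.save(dataset, "squad_train.pt")    # pickles the Dataset object itself
reloaded = torch.load("squad_train.pt")  # raised the unpickling error on 0.3.0
```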
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/443/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/443/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/442
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/442/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/442/comments
https://api.github.com/repos/huggingface/datasets/issues/442/events
https://github.com/huggingface/datasets/issues/442
666,201,810
MDU6SXNzdWU2NjYyMDE4MTA=
442
[Suggestion] Glue Diagnostic Data with Labels
{ "login": "ggbetz", "id": 3662782, "node_id": "MDQ6VXNlcjM2NjI3ODI=", "avatar_url": "https://avatars.githubusercontent.com/u/3662782?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ggbetz", "html_url": "https://github.com/ggbetz", "followers_url": "https://api.github.com/users/ggbetz/foll...
[ { "id": 2067401494, "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion", "name": "Dataset discussion", "color": "72f99f", "default": false, "description": "Discussions on the datasets" } ]
open
false
null
[]
null
[]
2020-07-27T10:59:58
2020-08-24T15:13:20
null
NONE
null
null
null
Hello! First of all, thanks for setting up this useful project! I've just realised you provide the [Glue Diagnostics Data](https://huggingface.co/nlp/viewer/?dataset=glue&config=ax) without labels, indicating in the `GlueConfig` that you have only a test set. Yet, the data with labels is available, too (see als...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/442/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/442/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/439
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/439/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/439/comments
https://api.github.com/repos/huggingface/datasets/issues/439/events
https://github.com/huggingface/datasets/issues/439
665,964,673
MDU6SXNzdWU2NjU5NjQ2NzM=
439
Issues: Adding a FAISS or Elastic Search index to a Dataset
{ "login": "nsankar", "id": 431890, "node_id": "MDQ6VXNlcjQzMTg5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nsankar", "html_url": "https://github.com/nsankar", "followers_url": "https://api.github.com/users/nsankar/fo...
[]
closed
false
null
[]
null
[ "`DPRContextEncoder` and `DPRContextEncoderTokenizer` will be available in the next release of `transformers`.\r\n\r\nRight now you can experiment with it by installing `transformers` from the master branch.\r\nYou can also check the docs of DPR [here](https://huggingface.co/transformers/master/model_doc/dpr.html)....
2020-07-27T04:25:17
2020-10-28T01:46:24
2020-10-28T01:46:24
NONE
null
null
null
It seems the DPRContextEncoder and DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? They did not work with the standard nlp installation. Also, I couldn't find or use them with the latest nlp install from GitHub in Colab. Is there any dependency on t...
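Independently of DPR's availability, the index API itself can be exercised with any encoder. A hedged sketch; the dataset, the `line` column, and the random stand-in embedder are illustrative:

```python
import numpy as np
import nlp

ds = nlp.load_dataset("crime_and_punish", split="train[:100]")

def embed(batch):
    # Stand-in for a real encoder such as DPRContextEncoder: any function that
    # maps text to fixed-size float32 vectors works here.
    batch["embeddings"] = [np.random.rand(768).astype("float32") for _ in batch["line"]]
    return batch

ds = ds.map(embed, batched=True)
ds.add_faiss_index(column="embeddings")
query = np.random.rand(768).astype("float32")
scores, examples = ds.get_nearest_examples("embeddings", query, k=5)
```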
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/439/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/439/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/438
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/438/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/438/comments
https://api.github.com/repos/huggingface/datasets/issues/438/events
https://github.com/huggingface/datasets/issues/438
665,865,490
MDU6SXNzdWU2NjU4NjU0OTA=
438
New Datasets: IWSLT15+, ITTB
{ "login": "sshleifer", "id": 6045025, "node_id": "MDQ6VXNlcjYwNDUwMjU=", "avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sshleifer", "html_url": "https://github.com/sshleifer", "followers_url": "https://api.github.com/users/ss...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
open
false
null
[]
null
[ "Thanks Sam, we now have a very detailed tutorial and template on how to add a new dataset to the library. It typically take 1-2 hours to add one. Do you want to give it a try ?\r\nThe tutorial on writing a new dataset loading script is here: https://huggingface.co/nlp/add_dataset.html\r\nAnd the part on how to sha...
2020-07-26T21:43:04
2020-08-24T15:12:15
null
CONTRIBUTOR
null
null
null
**Links:** [iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html) Don't know if that link is up to date. [ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/) **Motivation**: replicate mbart finetuning results (table below) ![image](https://user-images.githubusercontent.com/60450...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/438/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/438/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/436
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/436/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/436/comments
https://api.github.com/repos/huggingface/datasets/issues/436/events
https://github.com/huggingface/datasets/issues/436
665,582,167
MDU6SXNzdWU2NjU1ODIxNjc=
436
Google Colab - load_dataset - PyArrow exception
{ "login": "nsankar", "id": 431890, "node_id": "MDQ6VXNlcjQzMTg5MA==", "avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nsankar", "html_url": "https://github.com/nsankar", "followers_url": "https://api.github.com/users/nsankar/fo...
[]
closed
false
null
[]
null
[ "Indeed, we’ll make a new PyPi release next week to solve this. Cc @lhoestq ", "+1! this is the reason our tests are failing at [TextAttack](https://github.com/QData/TextAttack) \r\n\r\n(Though it's worth noting if we fixed the version number of pyarrow to 0.16.0 that would fix our problem too. But in this case w...
2020-07-25T13:05:20
2020-08-20T08:08:18
2020-08-20T08:08:18
NONE
null
null
null
With the latest PyArrow 1.0.0 installed, I get the following exception. Restarting Colab gives the same issue: ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just rest...
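The usual resolution, sketched as a Colab cell: upgrade, then restart the runtime so the already-imported old `pyarrow` module is actually replaced:

```python
# In a Colab cell:
!pip install -U nlp "pyarrow>=0.16.0"
# Then Runtime -> Restart runtime, and verify:
import pyarrow
print(pyarrow.__version__)
```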
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/436/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/436/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/435
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/435/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/435/comments
https://api.github.com/repos/huggingface/datasets/issues/435/events
https://github.com/huggingface/datasets/issues/435
665,507,141
MDU6SXNzdWU2NjU1MDcxNDE=
435
ImportWarning for pyarrow 1.0.0
{ "login": "HanGuo97", "id": 18187806, "node_id": "MDQ6VXNlcjE4MTg3ODA2", "avatar_url": "https://avatars.githubusercontent.com/u/18187806?v=4", "gravatar_id": "", "url": "https://api.github.com/users/HanGuo97", "html_url": "https://github.com/HanGuo97", "followers_url": "https://api.github.com/users/Han...
[]
closed
false
null
[]
null
[ "This was fixed in #434 \r\nWe'll do a release later this week to include this fix.\r\nThanks for reporting", "I dont know if the fix was made but the problem is still present : \r\nInstaled with pip : NLP 0.3.0 // pyarrow 1.0.0 \r\nOS : archlinux with kernel zen 5.8.5", "Yes it was fixed in `nlp>=0.4.0`\r\nYou...
2020-07-25T03:44:39
2020-09-08T17:57:15
2020-08-03T16:37:32
NONE
null
null
null
The following PR raised ImportWarning at `pyarrow ==1.0.0` https://github.com/huggingface/nlp/pull/265/files
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/435/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/435/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/433
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/433/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/433/comments
https://api.github.com/repos/huggingface/datasets/issues/433/events
https://github.com/huggingface/datasets/issues/433
665,311,025
MDU6SXNzdWU2NjUzMTEwMjU=
433
How to reuse functionality of a (generic) dataset?
{ "login": "ArneBinder", "id": 3375489, "node_id": "MDQ6VXNlcjMzNzU0ODk=", "avatar_url": "https://avatars.githubusercontent.com/u/3375489?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ArneBinder", "html_url": "https://github.com/ArneBinder", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
[ "Hi @ArneBinder, we have a few \"generic\" datasets which are intended to load data files with a predefined format:\r\n- csv: https://github.com/huggingface/nlp/tree/master/datasets/csv\r\n- json: https://github.com/huggingface/nlp/tree/master/datasets/json\r\n- text: https://github.com/huggingface/nlp/tree/master/...
2020-07-24T17:27:37
2022-10-04T17:59:34
2022-10-04T17:59:33
NONE
null
null
null
I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to...
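As the reply above notes, the generic loaders take data files directly, so a project-specific dataset can often be expressed as configuration rather than code. A minimal sketch; the file names are illustrative:

```python
import nlp

csv_ds = nlp.load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
json_ds = nlp.load_dataset("json", data_files="records.jsonl")
text_ds = nlp.load_dataset("text", data_files="corpus.txt")
```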
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/433/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/433/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/426
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/426/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/426/comments
https://api.github.com/repos/huggingface/datasets/issues/426/events
https://github.com/huggingface/datasets/issues/426
664,203,897
MDU6SXNzdWU2NjQyMDM4OTc=
426
[FEATURE REQUEST] Multiprocessing for dataset.map, dataset.filter
{ "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.g...
[ { "id": 1935892871, "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement", "name": "enhancement", "color": "a2eeef", "default": true, "description": "New feature or request" } ]
closed
false
null
[]
null
[ "Yes that's definitely something we plan to add ^^", "Yes, that would be nice. We could take a look at what tensorflow `tf.data` does under the hood for instance.", "So `tf.data.Dataset.map()` returns a `ParallelMapDataset` if `num_parallel_calls is not None` [link](https://github.com/tensorflow/tensorflow/blob...
2020-07-23T05:00:41
2021-03-12T09:34:12
2020-09-07T14:48:04
NONE
null
null
null
It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset, sending each shard to a process/thread/dask pool, and using the new `nlp.concatenate_dataset()` function to join them all together?
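A hedged sketch of exactly that workaround, written against the API of this era (`Dataset.shard` plus the concatenation helper, spelled `concatenate_datasets` in the releases that ship it); later releases folded this into `map(..., num_proc=...)`:

```python
from multiprocessing import Pool

import nlp

def add_length(example):
    example["n_chars"] = len(example["text"])
    return example

def process_shard(shard):
    return shard.map(add_length)

dataset = nlp.load_dataset("bookcorpus", split="train")
shards = [dataset.shard(num_shards=4, index=i) for i in range(4)]
with Pool(4) as pool:
    mapped = pool.map(process_shard, shards)
dataset = nlp.concatenate_datasets(mapped)
```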
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/426/reactions", "total_count": 3, "+1": 3, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/426/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/425
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/425/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/425/comments
https://api.github.com/repos/huggingface/datasets/issues/425/events
https://github.com/huggingface/datasets/issues/425
664,029,848
MDU6SXNzdWU2NjQwMjk4NDg=
425
Correct data structure for PAN-X task in XTREME dataset?
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/fo...
[]
closed
false
null
[]
null
[ "Thanks for noticing ! This looks more reasonable indeed.\r\nFeel free to open a PR", "Hi @lhoestq \r\nI made the proposed changes to the `xtreme.py` script. I noticed that I also need to change the schema in the `dataset_infos.json` file. More specifically the `\"features\"` part of the PAN-X.LANG dataset:\r\n\...
2020-07-22T20:29:20
2020-08-02T13:30:34
2020-08-02T13:30:34
MEMBER
null
null
null
Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['tr...
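For reference, the flatter structure being discussed can be written down directly as `Features`; a sketch with illustrative field names (the schema that eventually landed used aligned per-sentence sequences along these lines):

```python
import nlp

features = nlp.Features(
    {
        "tokens": nlp.Sequence(nlp.Value("string")),
        "ner_tags": nlp.Sequence(nlp.Value("string")),
        "langs": nlp.Sequence(nlp.Value("string")),
    }
)
```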
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/425/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/425/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/418
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/418/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/418/comments
https://api.github.com/repos/huggingface/datasets/issues/418/events
https://github.com/huggingface/datasets/issues/418
661,914,873
MDU6SXNzdWU2NjE5MTQ4NzM=
418
Addition of google drive links to dl_manager
{ "login": "lordtt13", "id": 35500534, "node_id": "MDQ6VXNlcjM1NTAwNTM0", "avatar_url": "https://avatars.githubusercontent.com/u/35500534?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lordtt13", "html_url": "https://github.com/lordtt13", "followers_url": "https://api.github.com/users/lor...
[]
closed
false
null
[]
null
[ "I think the problem is the way you wrote your urls. Try the following structure to see `https://drive.google.com/uc?export=download&id=your_file_id` . \r\n\r\n@lhoestq ", "Oh sorry, I think `_get_drive_url` is doing that. \r\n\r\nHave you tried to use `dl_manager.download_and_extract(_get_drive_url(_TRAIN_URL)`...
2020-07-20T14:52:02
2020-07-20T15:39:32
2020-07-20T15:39:32
CONTRIBUTOR
null
null
null
Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown. This is the script for me: ```python class EmoConfig(nlp.BuilderConfig): """BuilderConfig ...
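A hedged sketch of the rewrite suggested in the replies: convert a Drive sharing link into the direct-download form so `dl_manager` can fetch it. `_get_drive_url` is a local helper, not part of the nlp API, and the sharing-link format is assumed:

```python
def _get_drive_url(url):
    # https://drive.google.com/file/d/<FILE_ID>/view?usp=sharing
    #   -> https://drive.google.com/uc?export=download&id=<FILE_ID>
    file_id = url.split("/d/")[1].split("/")[0]
    return "https://drive.google.com/uc?export=download&id=" + file_id

# then, inside _split_generators:
# path = dl_manager.download_and_extract(_get_drive_url(_TRAIN_URL))
```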
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/418/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/418/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/415
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/415/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/415/comments
https://api.github.com/repos/huggingface/datasets/issues/415/events
https://github.com/huggingface/datasets/issues/415
660,687,076
MDU6SXNzdWU2NjA2ODcwNzY=
415
Something is wrong with WMT 19 kk-en dataset
{ "login": "ChenghaoMou", "id": 32014649, "node_id": "MDQ6VXNlcjMyMDE0NjQ5", "avatar_url": "https://avatars.githubusercontent.com/u/32014649?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ChenghaoMou", "html_url": "https://github.com/ChenghaoMou", "followers_url": "https://api.github.com/...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
open
false
null
[]
null
[]
2020-07-19T08:18:51
2020-07-20T09:54:26
null
NONE
null
null
null
The translation in the `train` set does not look right: ``` >>>import nlp >>>from nlp import load_dataset >>>dataset = load_dataset('wmt19', 'kk-en') >>>dataset["train"]["translation"][0] {'kk': 'Trumpian Uncertainty', 'en': 'Трамптық белгісіздік'} >>>dataset["validation"]["translation"][0] {'kk': 'Ақша-несие...
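Until the source data is fixed, the swap can be undone on the fly; a hedged sketch assuming only the train split is affected, as the example suggests:

```python
def unswap(example):
    t = example["translation"]
    return {"translation": {"kk": t["en"], "en": t["kk"]}}

fixed_train = dataset["train"].map(unswap)
```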
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/415/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/415/timeline
null
null
https://api.github.com/repos/huggingface/datasets/issues/414
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/414/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/414/comments
https://api.github.com/repos/huggingface/datasets/issues/414/events
https://github.com/huggingface/datasets/issues/414
660,654,013
MDU6SXNzdWU2NjA2NTQwMTM=
414
from_dict delete?
{ "login": "hackerxiaobai", "id": 22817243, "node_id": "MDQ6VXNlcjIyODE3MjQz", "avatar_url": "https://avatars.githubusercontent.com/u/22817243?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hackerxiaobai", "html_url": "https://github.com/hackerxiaobai", "followers_url": "https://api.githu...
[]
closed
false
null
[]
null
[ "`from_dict` was added in #350 that was unfortunately not included in the 0.3.0 release. It's going to be included in the next release that will be out pretty soon though.\r\nRight now if you want to use `from_dict` you have to install the package from the master branch\r\n```\r\npip install git+https://github.com/...
2020-07-19T07:08:36
2020-07-21T02:21:17
2020-07-21T02:21:17
NONE
null
null
null
AttributeError: type object 'Dataset' has no attribute 'from_dict'
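Once a version that includes #350 is installed (master at the time, per the reply above), the method works as expected; a tiny sketch:

```python
import nlp

ds = nlp.Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
print(ds[0])  # {'text': 'a', 'label': 0}
```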
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/414/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/414/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/413
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/413/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/413/comments
https://api.github.com/repos/huggingface/datasets/issues/413/events
https://github.com/huggingface/datasets/issues/413
660,063,655
MDU6SXNzdWU2NjAwNjM2NTU=
413
Is there a way to download only NQ dev?
{ "login": "tholor", "id": 1563902, "node_id": "MDQ6VXNlcjE1NjM5MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/1563902?v=4", "gravatar_id": "", "url": "https://api.github.com/users/tholor", "html_url": "https://github.com/tholor", "followers_url": "https://api.github.com/users/tholor/foll...
[]
closed
false
null
[]
null
[ "Unfortunately it's not possible to download only the dev set of NQ.\r\n\r\nI think we could add a way to download only the test set by adding a custom configuration to the processing script though.", "Ok, got it. I think this could be a valuable feature - especially for large datasets like NQ, but potentially al...
2020-07-18T10:28:23
2022-02-11T09:50:21
2022-02-11T09:50:21
NONE
null
null
null
Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('natural_questions', split="validation", bea...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/413/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/413/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/412
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/412/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/412/comments
https://api.github.com/repos/huggingface/datasets/issues/412/events
https://github.com/huggingface/datasets/issues/412
660,047,139
MDU6SXNzdWU2NjAwNDcxMzk=
412
Unable to load XTREME dataset from disk
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/fo...
[]
closed
false
null
[]
null
[ "Hi @lewtun, you have to provide the full path to the downloaded file for example `/home/lewtum/..`", "I was able to repro. Opening a PR to fix that.\r\nThanks for reporting this issue !", "Thanks for the rapid fix @lhoestq!" ]
2020-07-18T09:55:00
2020-07-21T08:15:44
2020-07-21T08:15:44
MEMBER
null
null
null
Hi 🤗 team! ## Description of the problem Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark. I have manually downloaded the `AmazonPho...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/412/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/412/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/409
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/409/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/409/comments
https://api.github.com/repos/huggingface/datasets/issues/409/events
https://github.com/huggingface/datasets/issues/409
659,128,611
MDU6SXNzdWU2NTkxMjg2MTE=
409
train_test_split error: 'dict' object has no attribute 'deepcopy'
{ "login": "morganmcg1", "id": 20516801, "node_id": "MDQ6VXNlcjIwNTE2ODAx", "avatar_url": "https://avatars.githubusercontent.com/u/20516801?v=4", "gravatar_id": "", "url": "https://api.github.com/users/morganmcg1", "html_url": "https://github.com/morganmcg1", "followers_url": "https://api.github.com/use...
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "It was fixed in 2ddd18d139d3047c9c3abe96e1e7d05bb360132c.\r\nCould you pull the latest changes from master @morganmcg1 ?", "Thanks @lhoestq, works fine now!" ]
2020-07-17T10:36:28
2020-07-21T14:34:52
2020-07-21T14:34:52
NONE
null
null
null
`train_test_split` is giving me an error when I try and call it: `'dict' object has no attribute 'deepcopy'` ## To reproduce ``` dataset = load_dataset('glue', 'mrpc', split='train') dataset = dataset.train_test_split(test_size=0.2) ``` ## Full Stacktrace ``` -------------------------------------------...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/409/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/409/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/407
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/407/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/407/comments
https://api.github.com/repos/huggingface/datasets/issues/407/events
https://github.com/huggingface/datasets/issues/407
658,672,736
MDU6SXNzdWU2NTg2NzI3MzY=
407
MissingBeamOptions for Wikipedia 20200501.en
{ "login": "mitchellgordon95", "id": 7490438, "node_id": "MDQ6VXNlcjc0OTA0Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mitchellgordon95", "html_url": "https://github.com/mitchellgordon95", "followers_url": "https://ap...
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[ "Fixed. Could you try again @mitchellgordon95 ?\r\nIt was due a file not being updated on S3.\r\n\r\nWe need to make sure all the datasets scripts get updated properly @julien-c ", "Works for me! Thanks.", "I found the same issue with almost any language other than English. (For English, it works). Will someone...
2020-07-16T23:48:03
2021-01-12T11:41:16
2020-07-17T14:24:28
CONTRIBUTOR
null
null
null
There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: ``` Downloading and preparing dataset wikipedia...
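When no preprocessed copy of a config is available, the documented escape hatch is to supply a Beam runner yourself; a sketch using the local `DirectRunner` (slow for full English Wikipedia, but it unblocks the build):

```python
import nlp

wiki = nlp.load_dataset(
    "wikipedia",
    "20200501.en",
    beam_runner="DirectRunner",  # processes the dump locally with Apache Beam
    split="train",
)
```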
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/407/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/407/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/406
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/406/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/406/comments
https://api.github.com/repos/huggingface/datasets/issues/406/events
https://github.com/huggingface/datasets/issues/406
658,581,764
MDU6SXNzdWU2NTg1ODE3NjQ=
406
Faster Shuffling?
{ "login": "mitchellgordon95", "id": 7490438, "node_id": "MDQ6VXNlcjc0OTA0Mzg=", "avatar_url": "https://avatars.githubusercontent.com/u/7490438?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mitchellgordon95", "html_url": "https://github.com/mitchellgordon95", "followers_url": "https://ap...
[]
closed
false
null
[]
null
[ "I think the slowness here probably come from the fact that we are copying from and to python.\r\n\r\n@lhoestq for all the `select`-based methods I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. What do you think?", "> @lhoestq for all the `select...
2020-07-16T21:21:53
2023-08-16T09:52:39
2020-09-07T14:45:25
CONTRIBUTOR
null
null
null
Consider shuffling bookcorpus: ``` dataset = nlp.load_dataset('bookcorpus', split='train') dataset.shuffle() ``` According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`...
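One era-appropriate workaround, hedged: skip rewriting the table entirely and just iterate in a shuffled order, since what most training loops need is shuffled access rather than a shuffled copy:

```python
import numpy as np
import nlp

dataset = nlp.load_dataset("bookcorpus", split="train")
order = np.random.permutation(len(dataset))
for i in order:
    example = dataset[int(i)]  # random access into the memory-mapped table
```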
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/406/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/406/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/395
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/395/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/395/comments
https://api.github.com/repos/huggingface/datasets/issues/395/events
https://github.com/huggingface/datasets/issues/395
657,454,983
MDU6SXNzdWU2NTc0NTQ5ODM=
395
Memory issue when doing select
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[ { "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.git...
null
[]
2020-07-15T15:43:38
2020-07-16T08:07:31
2020-07-16T08:07:31
MEMBER
null
null
null
As noticed in #389, the following code loads the entire wikipedia in memory. ```python import nlp w = nlp.load_dataset("wikipedia", "20200501.en", split="train") w.select([0]) ``` This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626) for some reason, that ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/395/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/395/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/388
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/388/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/388/comments
https://api.github.com/repos/huggingface/datasets/issues/388/events
https://github.com/huggingface/datasets/issues/388
656,707,497
MDU6SXNzdWU2NTY3MDc0OTc=
388
🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17
{ "login": "SamuelCahyawijaya", "id": 2826602, "node_id": "MDQ6VXNlcjI4MjY2MDI=", "avatar_url": "https://avatars.githubusercontent.com/u/2826602?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SamuelCahyawijaya", "html_url": "https://github.com/SamuelCahyawijaya", "followers_url": "https:/...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[ { "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "follo...
null
[ "similar slow download speed here for nlp.load_dataset('wmt14', 'fr-en')\r\n`\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 658M/658M [1:00:42<00:00, 181kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 918M/918M [1:39:38<00:00, 154kB/s]\r\nDow...
2020-07-14T15:36:41
2022-10-04T18:01:28
2022-10-04T18:01:28
NONE
null
null
null
1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code: ``` nlp.load_dataset('wmt14','de-en') nlp.load_dataset('wmt15','de-en') nlp.load_dataset('wmt17','de-en') nlp.load_dataset('wmt19','de-en') ``` The code runs but the download speed is **extremely slow**, the same behaviour is not ob...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/388/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/388/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/387
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/387/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/387/comments
https://api.github.com/repos/huggingface/datasets/issues/387/events
https://github.com/huggingface/datasets/issues/387
656,361,357
MDU6SXNzdWU2NTYzNjEzNTc=
387
Conversion through to_pandas output numpy arrays for lists instead of python objects
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomw...
[]
closed
false
null
[]
null
[ "To convert from arrow type we have three options: to_numpy, to_pandas and to_pydict/to_pylist.\r\n\r\n- to_numpy and to_pandas return numpy arrays instead of lists but are very fast.\r\n- to_pydict/to_pylist can be 100x slower and become the bottleneck for reading data, but at least they return lists.\r\n\r\nMaybe...
2020-07-14T06:24:01
2020-07-17T11:37:00
2020-07-17T11:37:00
MEMBER
null
null
null
In a related question, the conversion through to_pandas output numpy arrays for the lists instead of python objects. Here is an example: ```python >>> dataset._data.slice(key, 1).to_pandas().to_dict("list") {'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting hi...
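The trade-off described in the replies, side by side; same one-row slice, two conversions:

```python
table = dataset._data.slice(0, 1)

fast = table.to_pandas().to_dict("list")  # fast; list columns come back as numpy arrays
exact = table.to_pydict()                 # much slower; plain python lists throughout
```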
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/387/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/387/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/382
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/382/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/382/comments
https://api.github.com/repos/huggingface/datasets/issues/382/events
https://github.com/huggingface/datasets/issues/382
655,290,482
MDU6SXNzdWU2NTUyOTA0ODI=
382
1080
{ "login": "saq194", "id": 60942503, "node_id": "MDQ6VXNlcjYwOTQyNTAz", "avatar_url": "https://avatars.githubusercontent.com/u/60942503?v=4", "gravatar_id": "", "url": "https://api.github.com/users/saq194", "html_url": "https://github.com/saq194", "followers_url": "https://api.github.com/users/saq194/fo...
[]
closed
false
null
[]
null
[]
2020-07-11T22:29:07
2020-07-11T22:49:38
2020-07-11T22:49:38
NONE
null
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/382/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/382/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/381
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/381/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/381/comments
https://api.github.com/repos/huggingface/datasets/issues/381/events
https://github.com/huggingface/datasets/issues/381
655,277,119
MDU6SXNzdWU2NTUyNzcxMTk=
381
NLp
{ "login": "Spartanthor", "id": 68147610, "node_id": "MDQ6VXNlcjY4MTQ3NjEw", "avatar_url": "https://avatars.githubusercontent.com/u/68147610?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Spartanthor", "html_url": "https://github.com/Spartanthor", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
[]
2020-07-11T20:50:14
2020-07-11T20:50:39
2020-07-11T20:50:39
NONE
null
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/381/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/381/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/378
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/378/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/378/comments
https://api.github.com/repos/huggingface/datasets/issues/378/events
https://github.com/huggingface/datasets/issues/378
655,226,316
MDU6SXNzdWU2NTUyMjYzMTY=
378
[dataset] Structure of MLQA seems unnecessarily nested
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomw...
[]
closed
false
null
[]
null
[ "Same for the RACE dataset: https://github.com/huggingface/nlp/blob/master/datasets/race/race.py\r\n\r\nShould we scan all the datasets to remove this pattern of un-necessary nesting?", "You're right, I think we don't need to use the nested dictionary. \r\n" ]
2020-07-11T15:16:08
2020-07-15T16:17:20
2020-07-15T16:17:20
MEMBER
null
null
null
The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97 Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds? ```python ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/378/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/378/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/377
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/377/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/377/comments
https://api.github.com/repos/huggingface/datasets/issues/377/events
https://github.com/huggingface/datasets/issues/377
655,215,790
MDU6SXNzdWU2NTUyMTU3OTA=
377
Iyy!!!
{ "login": "ajinomoh", "id": 68154535, "node_id": "MDQ6VXNlcjY4MTU0NTM1", "avatar_url": "https://avatars.githubusercontent.com/u/68154535?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ajinomoh", "html_url": "https://github.com/ajinomoh", "followers_url": "https://api.github.com/users/aji...
[]
closed
false
null
[]
null
[]
2020-07-11T14:11:07
2020-07-11T14:30:51
2020-07-11T14:30:51
NONE
null
null
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/377/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/377/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/376
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/376/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/376/comments
https://api.github.com/repos/huggingface/datasets/issues/376/events
https://github.com/huggingface/datasets/issues/376
655,047,826
MDU6SXNzdWU2NTUwNDc4MjY=
376
to_pandas conversion doesn't always work
{ "login": "thomwolf", "id": 7353373, "node_id": "MDQ6VXNlcjczNTMzNzM=", "avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomwolf", "html_url": "https://github.com/thomwolf", "followers_url": "https://api.github.com/users/thomw...
[]
closed
false
null
[]
null
[ "**Edit**: other topic previously in this message moved to a new issue: https://github.com/huggingface/nlp/issues/387", "Could you try to update pyarrow to >=0.17.0 ? It should fix the `to_pandas` bug\r\n\r\nAlso I'm not sure that structures like list<struct> are fully supported in the lib (none of the datasets u...
2020-07-10T21:33:31
2022-10-04T18:05:39
2022-10-04T18:05:39
MEMBER
null
null
null
For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible. Here is an example using the official SQUAD v2 JSON file. This example was found while investigating #373. ```python >>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0....
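Two workarounds consistent with the discussion: upgrade `pyarrow` to >= 0.17.0 (which fixed the `to_pandas` bug), or bypass pandas for deeply nested columns. A sketch; the `"train"` key naming is illustrative for a dataset loaded with `nlp.Split.TRAIN`:

```python
# No pandas in the loop: convert the Arrow table straight to python objects.
records = squad["train"]._data.to_pydict()
```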
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/376/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/376/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/375
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/375/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/375/comments
https://api.github.com/repos/huggingface/datasets/issues/375/events
https://github.com/huggingface/datasets/issues/375
655,023,307
MDU6SXNzdWU2NTUwMjMzMDc=
375
TypeError when computing bertscore
{ "login": "willywsm1013", "id": 13269577, "node_id": "MDQ6VXNlcjEzMjY5NTc3", "avatar_url": "https://avatars.githubusercontent.com/u/13269577?v=4", "gravatar_id": "", "url": "https://api.github.com/users/willywsm1013", "html_url": "https://github.com/willywsm1013", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
[ "I am not able to reproduce this issue on my side.\r\nCould you give us more details about the inputs you used ?\r\n\r\nI do get another error though:\r\n```\r\n~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/bert_score/utils.py in bert_cos_score_idf(model, refs, hyps, tokenizer, idf_dict, verbose, batch_siz...
2020-07-10T20:37:44
2022-06-01T15:15:59
2022-06-01T15:15:59
NONE
null
null
null
Hi, I installed nlp 0.3.0 via pip, and my python version is 3.7. When I tried to compute bertscore with the code: ``` import nlp bertscore = nlp.load_metric('bertscore') # load hyps and refs ... print (bertscore.compute(hyps, refs, lang='en')) ``` I got the following error. ``` Traceback (most rece...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/375/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/375/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/373
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/373/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/373/comments
https://api.github.com/repos/huggingface/datasets/issues/373/events
https://github.com/huggingface/datasets/issues/373
654,845,133
MDU6SXNzdWU2NTQ4NDUxMzM=
373
Segmentation fault when loading local JSON dataset as of #372
{ "login": "vegarab", "id": 24683907, "node_id": "MDQ6VXNlcjI0NjgzOTA3", "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vegarab", "html_url": "https://github.com/vegarab", "followers_url": "https://api.github.com/users/vegara...
[]
closed
false
null
[]
null
[ "I've seen this sort of thing before -- it might help to delete the directory -- I've also noticed that there is an error with the json Dataloader for any data I've tried to load. I've replaced it with this, which skips over the data feature population step:\r\n\r\n\r\n```python\r\nimport os\r\n\r\nimport pyarrow.j...
2020-07-10T15:04:25
2022-10-04T18:05:47
2022-10-04T18:05:47
CONTRIBUTOR
null
null
null
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, f...
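A hedged workaround that sidesteps the generic `json` loader entirely: flatten the nested SQuAD structure in plain Python first, then build the dataset with `Dataset.from_dict` (which needs a version newer than 0.3.0, see the `from_dict` issue above):

```python
import json

import nlp

with open("./datasets/train-v2.0.json", encoding="utf-8") as f:
    raw = json.load(f)

rows = {"question": [], "context": []}
for article in raw["data"]:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            rows["question"].append(qa["question"])
            rows["context"].append(paragraph["context"])

dataset = nlp.Dataset.from_dict(rows)
```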
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/373/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/373/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/369
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/369/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/369/comments
https://api.github.com/repos/huggingface/datasets/issues/369/events
https://github.com/huggingface/datasets/issues/369
654,186,890
MDU6SXNzdWU2NTQxODY4OTA=
369
can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries
{ "login": "vegarab", "id": 24683907, "node_id": "MDQ6VXNlcjI0NjgzOTA3", "avatar_url": "https://avatars.githubusercontent.com/u/24683907?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vegarab", "html_url": "https://github.com/vegarab", "followers_url": "https://api.github.com/users/vegara...
[ { "id": 2067388877, "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug", "name": "dataset bug", "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library" } ]
closed
false
null
[]
null
[ "I am able to reproduce this with the official SQuAD `train-v2.0.json` file downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/", "I am facing this issue in transformers library 3.0.2 while reading a csv using datasets.\r\nIs this fixed in latest version? \r\nI updated the latest version 4.0.1 bu...
2020-07-09T16:16:53
2020-12-15T23:07:22
2020-07-10T14:52:06
CONTRIBUTOR
null
null
null
Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB): ``` dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]}) ``` causes ``` Traceback (most recent call last): File "dataloader.py", line 9, in <module> ["./path/to/file.json"]}) File "/...
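This pyarrow error means a single JSON value is bigger than the reader's parse block. If the file has to go through the json reader as-is, enlarging the block is the standard knob; a sketch:

```python
import pyarrow.json as paj

table = paj.read_json(
    "./path/to/file.json",
    read_options=paj.ReadOptions(block_size=1 << 24),  # 16 MiB parse blocks
)
```

For one giant top-level object like SQuAD, no block size may be large enough to matter, in which case flattening the file first (as in the workaround under the later segfault issue) is the more robust route.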
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/369/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/369/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/368
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/368/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/368/comments
https://api.github.com/repos/huggingface/datasets/issues/368/events
https://github.com/huggingface/datasets/issues/368
654,087,251
MDU6SXNzdWU2NTQwODcyNTE=
368
load_metric can't acquire lock anymore
{ "login": "ydshieh", "id": 2521628, "node_id": "MDQ6VXNlcjI1MjE2Mjg=", "avatar_url": "https://avatars.githubusercontent.com/u/2521628?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ydshieh", "html_url": "https://github.com/ydshieh", "followers_url": "https://api.github.com/users/ydshieh/...
[]
closed
false
null
[]
null
[ "I found that, in the same process (or the same interactive session), if I do\r\n\r\nimport nlp\r\n\r\nm1 = nlp.load_metric('glue', 'mrpc')\r\nm2 = nlp.load_metric('glue', 'sst2')\r\n\r\nI will get the same error `ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a uni...
2020-07-09T14:04:09
2020-07-10T13:45:20
2020-07-10T13:45:20
NONE
null
null
null
I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this? Traceback (most recent call last): File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/n...
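A hedged sketch of the per-process isolation the lock error is asking for, assuming a release where `load_metric` accepts an `experiment_id` (it does in the later `datasets` library):

```python
import nlp

# Distinct experiment ids -> distinct cache files -> no lock collision between
# metrics loaded in the same session or across processes.
m1 = nlp.load_metric("glue", "mrpc", experiment_id="run-mrpc")
m2 = nlp.load_metric("glue", "sst2", experiment_id="run-sst2")
```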
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/368/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/368/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/365
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/365/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/365/comments
https://api.github.com/repos/huggingface/datasets/issues/365/events
https://github.com/huggingface/datasets/issues/365
653,845,964
MDU6SXNzdWU2NTM4NDU5NjQ=
365
How to augment data?
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/ast...
[]
closed
false
null
[]
null
[ "Using batched map is probably the easiest way at the moment.\r\nWhat kind of augmentation would you like to do ?", "Some samples in the dataset are too long, I want to divide them in several samples.", "Using batched map is the way to go then.\r\nWe'll make it clearer in the docs that map could be used for aug...
2020-07-09T07:52:37
2020-07-10T09:12:07
2020-07-10T08:22:15
NONE
null
null
null
Is there any clean way to augment data? For now my work-around is to use batched map, like this: ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = dataset.map(aug, batched=T...
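Since batched `map` may return more rows than it receives, the "divide long samples" case from the follow-up replies can be written directly; a sketch assuming a `text` column and a character-based split:

```python
import nlp

dataset = nlp.load_dataset("bookcorpus", split="train[:1000]")

def split_long(batch, max_len=128):
    out = {"text": []}
    for text in batch["text"]:
        out["text"].extend(text[i : i + max_len] for i in range(0, len(text), max_len))
    return out

dataset = dataset.map(split_long, batched=True, remove_columns=dataset.column_names)
```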
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/365/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/365/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/362
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/362/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/362/comments
https://api.github.com/repos/huggingface/datasets/issues/362/events
https://github.com/huggingface/datasets/issues/362
653,766,245
MDU6SXNzdWU2NTM3NjYyNDU=
362
[dataset subset missing] xtreme paws-x
{ "login": "jerryIsHere", "id": 50871412, "node_id": "MDQ6VXNlcjUwODcxNDEy", "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jerryIsHere", "html_url": "https://github.com/jerryIsHere", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
[ "You're right, thanks for pointing it out. We will update it " ]
2020-07-09T05:04:54
2020-07-09T12:38:42
2020-07-09T12:38:42
CONTRIBUTOR
null
null
null
I tried nlp.load_dataset('xtreme', 'PAWS-X.es') but got a ValueError. It turns out that the subset for Spanish is missing: https://github.com/google-research-datasets/paws/tree/master/pawsx
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/362/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/362/timeline
null
completed
https://api.github.com/repos/huggingface/datasets/issues/361
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/361/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/361/comments
https://api.github.com/repos/huggingface/datasets/issues/361/events
https://github.com/huggingface/datasets/issues/361
653,757,376
MDU6SXNzdWU2NTM3NTczNzY=
361
🐛 [Metrics] ROUGE is non-deterministic
{ "login": "astariul", "id": 43774355, "node_id": "MDQ6VXNlcjQzNzc0MzU1", "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "gravatar_id": "", "url": "https://api.github.com/users/astariul", "html_url": "https://github.com/astariul", "followers_url": "https://api.github.com/users/ast...
[]
closed
false
null
[]
null
[ "Hi, can you give a full self-contained example to reproduce this behavior?", "> Hi, can you give a full self-contained example to reproduce this behavior?\r\n\r\nThere is a notebook in the post ;)", "> If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different.\r\n...
2020-07-09T04:39:37
2022-09-09T15:20:55
2020-07-20T23:48:37
NONE
null
null
null
If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different. Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem. Example of F-score for ROUGE-1, ROUGE-2, ROUGE-L in 2 differe...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/361/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/361/timeline
null
completed
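For context on issue 361: the small run-to-run differences most likely come from the bootstrap aggregation step in the underlying `rouge_score` package, which randomly resamples per-example scores to build confidence intervals. A deterministic sketch that sidesteps the aggregator entirely by scoring each pair directly and averaging the F-measures yourself (the averaging choice is mine, not the metric's official behaviour):

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def mean_rouge(predictions, references):
    # Score every (reference, prediction) pair directly; no random
    # resampling is involved, so repeated runs give identical numbers.
    totals = {key: 0.0 for key in ("rouge1", "rouge2", "rougeL")}
    for pred, ref in zip(predictions, references):
        scores = scorer.score(ref, pred)  # target first, then prediction
        for key in totals:
            totals[key] += scores[key].fmeasure
    return {key: value / len(predictions) for key, value in totals.items()}
```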
https://api.github.com/repos/huggingface/datasets/issues/360
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/360/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/360/comments
https://api.github.com/repos/huggingface/datasets/issues/360/events
https://github.com/huggingface/datasets/issues/360
653,687,176
MDU6SXNzdWU2NTM2ODcxNzY=
360
[Feature request] Add dataset.ragged_map() function for many-to-many transformations
{ "login": "jarednielsen", "id": 4564897, "node_id": "MDQ6VXNlcjQ1NjQ4OTc=", "avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jarednielsen", "html_url": "https://github.com/jarednielsen", "followers_url": "https://api.github.com...
[]
closed
false
null
[]
null
[ "Actually `map(batched=True)` can already change the size of the dataset.\r\nIt can accept examples of length `N` and returns a batch of length `M` (can be null or greater than `N`).\r\n\r\nI'll make that explicit in the doc that I'm currently writing.", "You're two steps ahead of me :) In my testing, it also wor...
2020-07-09T01:04:43
2020-07-09T19:31:51
2020-07-09T19:31:51
CONTRIBUTOR
null
null
null
`dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines. `dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero/one example. This is helpful for removing portions from t...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/360/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/360/timeline
null
completed
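The maintainer's reply on issue 360 notes that `map(batched=True)` already supports output batches of a different length than the input. The augmentation sketch above shows the one-to-many direction; for completeness, a many-to-one sketch (the column name and batch size are illustrative assumptions):

```python
def merge_batch(batch):
    # Many-to-one: collapse each batch of examples into a single example,
    # so the mapped dataset is smaller than the original.
    return {"text": [" ".join(batch["text"])]}

merged = dataset.map(merge_batch, batched=True, batch_size=8,
                     remove_columns=dataset.column_names)
```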
https://api.github.com/repos/huggingface/datasets/issues/359
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/359/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/359/comments
https://api.github.com/repos/huggingface/datasets/issues/359/events
https://github.com/huggingface/datasets/issues/359
653,656,279
MDU6SXNzdWU2NTM2NTYyNzk=
359
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures
{ "login": "timothyjlaurent", "id": 2000204, "node_id": "MDQ6VXNlcjIwMDAyMDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2000204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/timothyjlaurent", "html_url": "https://github.com/timothyjlaurent", "followers_url": "https://api.g...
[]
closed
false
null
[]
null
[ "Hi, it depends on what it is in your `dataset_builder.py` file. Can you share it?\r\n\r\nIf you are just loading `json` files, you can also directly use the `json` script (which will find the schema/features from your JSON structure):\r\n\r\n```python\r\nfrom nlp import load_dataset\r\nds = load_dataset(\"json\", ...
2020-07-08T23:24:05
2020-07-10T14:52:06
2020-07-10T14:52:06
NONE
null
null
null
I tried using the Json dataloader to load some JSON lines files, but got an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-23-9aecfbee53bd> in <mo...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/359/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/359/timeline
null
completed
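The suggestion quoted in issue 359, using the generic `json` script instead of a custom builder, looks like this in a minimal form. The file name below is a placeholder:

```python
from nlp import load_dataset

# Let the generic "json" script infer the features from the file itself,
# instead of declaring a schema in a custom dataset builder.
ds = load_dataset("json", data_files="my_records.jsonl")
print(ds["train"][0])
```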
https://api.github.com/repos/huggingface/datasets/issues/355
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/355/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/355/comments
https://api.github.com/repos/huggingface/datasets/issues/355/events
https://github.com/huggingface/datasets/issues/355
653,451,013
MDU6SXNzdWU2NTM0NTEwMTM=
355
can't load SNLI dataset
{ "login": "jxmorris12", "id": 13238952, "node_id": "MDQ6VXNlcjEzMjM4OTUy", "avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jxmorris12", "html_url": "https://github.com/jxmorris12", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "I just added the processed files of `snli` on our google storage, so that when you do `load_dataset` it can download the processed files from there :)\r\n\r\nWe are thinking about having available those processed files for more datasets in the future, because sometimes files aren't available (like for `snli`), or ...
2020-07-08T16:54:14
2020-07-18T05:15:57
2020-07-15T07:59:01
CONTRIBUTOR
null
null
null
`nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't. Is there a plan to move these datasets to huggingface servers for a more stable solution? Btw, here's the stack trace: ``` ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/355/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/355/timeline
null
completed