url stringlengths 58 61 | repository_url stringclasses 1 value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 46 51 | id int64 599M 1.07B | node_id stringlengths 18 32 | number int64 1 3.39k | title stringlengths 1 276 | user dict | labels list | state stringclasses 1 value | locked bool 1 class | assignee dict | assignees list | milestone dict | comments list | created_at int64 1,587B 1,639B | updated_at int64 1,587B 1,639B | closed_at int64 1,587B 1,639B | author_association stringclasses 3 values | active_lock_reason null | body stringlengths 0 228k ⌀ | reactions dict | timeline_url stringlengths 67 70 | performed_via_github_app null | draft bool 2 classes | pull_request dict | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2290/comments | https://api.github.com/repos/huggingface/datasets/issues/2290/events | https://github.com/huggingface/datasets/pull/2290 | 871,145,817 | MDExOlB1bGxSZXF1ZXN0NjI2MjEyNTIz | 2,290 | Bbaw egyptian | {
"login": "phiwi",
"id": 54144149,
"node_id": "MDQ6VXNlcjU0MTQ0MTQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/54144149?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phiwi",
"html_url": "https://github.com/phiwi",
"followers_url": "https://api.github.com/users/phiwi/followers",
"following_url": "https://api.github.com/users/phiwi/following{/other_user}",
"gists_url": "https://api.github.com/users/phiwi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/phiwi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phiwi/subscriptions",
"organizations_url": "https://api.github.com/users/phiwi/orgs",
"repos_url": "https://api.github.com/users/phiwi/repos",
"events_url": "https://api.github.com/users/phiwi/events{/privacy}",
"received_events_url": "https://api.github.com/users/phiwi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @phiwi,\r\n\r\nThanks for contributing this nice dataset. If you have any blocking problem or question, do not hesitate to ask here. We are pleased to help you.\r\n\r\nCould you please first synchronize with our master branch? From your branch `bbaw_egyptian`, type:\r\n```\r\ngit fetch upstream master\r\ngit me... | 1,619,710,078,000 | 1,620,321,925,000 | 1,620,321,925,000 | CONTRIBUTOR | null | This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it again now, so that it is in the state as used in my paper (seee documentation). I hope it satiesfies your requirements and wish every scientist out their loads of fun deciphering a 5.000 years old language :-) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2290/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2290/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2290",
"html_url": "https://github.com/huggingface/datasets/pull/2290",
"diff_url": "https://github.com/huggingface/datasets/pull/2290.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2290.patch",
"merged_at": 1620321925000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2289 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2289/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2289/comments | https://api.github.com/repos/huggingface/datasets/issues/2289/events | https://github.com/huggingface/datasets/pull/2289 | 871,118,573 | MDExOlB1bGxSZXF1ZXN0NjI2MTg5MDU3 | 2,289 | Allow collaborators to self-assign issues | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"What do you think, @lhoestq? 😉 \r\n\r\nI think this could be another step to facilitate community contributions.",
"@lhoestq, it doesn't exist in `transformers`... I picked the idea from `scikit-learn`, where I have previously collaborated...\r\n\r\nAnd sure, this must be documented! I just wanted first to know... | 1,619,708,826,000 | 1,619,807,296,000 | 1,619,807,296,000 | MEMBER | null | Allow collaborators (without write access to the repository) to self-assign issues.
In order to self-assign an issue, they have to comment on it with one of the keywords: `#take` or `#self-assign`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2289/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2289/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2289",
"html_url": "https://github.com/huggingface/datasets/pull/2289",
"diff_url": "https://github.com/huggingface/datasets/pull/2289.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2289.patch",
"merged_at": 1619807296000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2288/comments | https://api.github.com/repos/huggingface/datasets/issues/2288/events | https://github.com/huggingface/datasets/issues/2288 | 871,111,235 | MDU6SXNzdWU4NzExMTEyMzU= | 2,288 | Load_dataset for local CSV files | {
"login": "sstojanoska",
"id": 17052700,
"node_id": "MDQ6VXNlcjE3MDUyNzAw",
"avatar_url": "https://avatars.githubusercontent.com/u/17052700?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sstojanoska",
"html_url": "https://github.com/sstojanoska",
"followers_url": "https://api.github.com/users/sstojanoska/followers",
"following_url": "https://api.github.com/users/sstojanoska/following{/other_user}",
"gists_url": "https://api.github.com/users/sstojanoska/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sstojanoska/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sstojanoska/subscriptions",
"organizations_url": "https://api.github.com/users/sstojanoska/orgs",
"repos_url": "https://api.github.com/users/sstojanoska/repos",
"events_url": "https://api.github.com/users/sstojanoska/events{/privacy}",
"received_events_url": "https://api.github.com/users/sstojanoska/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthis is not a standard CSV file (requires additional preprocessing) so I wouldn't label this as s bug. You could parse the examples with the regex module or the string API to extract the data, but the following approach is probably the easiest (once you load the data):\r\n```python\r\nimport ast\r\n# lo... | 1,619,708,470,000 | 1,623,764,966,000 | 1,623,764,966,000 | NONE | null | The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each holding a list of strings.
row example:
```
tokens | labels
['I', 'am', 'John'] | ['PRON', 'AUX', 'PROPN']
```
The method loads each list as a string (e.g. "['I', 'am', 'John']").
To solve this issue, I copied the dataset's Features, created Sequence types (instead of Value) and tried to cast the feature types:
```
new_features['tokens'] = Sequence(feature=Value(dtype='string', id=None))
new_features['labels'] = Sequence(feature=ClassLabel(num_classes=len(tag2idx), names=list(unique_tags)))
dataset = dataset.cast(new_features)
```
but I got the following error
```
ArrowNotImplementedError: Unsupported cast from string to list using function cast_list
```
Moreover, I tried to set the features parameter of the load_dataset method to my new_features, but this fails as well.
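The truncated reply above points at the likely workaround: parse the stringified lists back into Python lists with `ast.literal_eval` before casting. A minimal sketch of that approach, assuming `dataset` is the dataset loaded from the CSV and reusing the column names from this example:
```python
import ast

def parse_columns(example):
    # Each cell arrives as the string "['I', 'am', 'John']"; turn it back into a list.
    example["tokens"] = ast.literal_eval(example["tokens"])
    example["labels"] = ast.literal_eval(example["labels"])
    return example

dataset = dataset.map(parse_columns)
# After this, casting to Sequence/ClassLabel features should be possible.
```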
How can this be solved? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2288/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2288/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2287 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2287/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2287/comments | https://api.github.com/repos/huggingface/datasets/issues/2287/events | https://github.com/huggingface/datasets/pull/2287 | 871,063,374 | MDExOlB1bGxSZXF1ZXN0NjI2MTQ0MTQ3 | 2,287 | Avoid copying table's record batches | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for fixing it. I actually included a similar fix in #2291 along with some updates in tests\r\nI'm closing this one in favor of #2291 if you don't mind.\r\n\r\nThanks again !"
] | 1,619,705,701,000 | 1,619,714,063,000 | 1,619,714,062,000 | CONTRIBUTOR | null | Fixes #2276 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2287/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2287/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2287",
"html_url": "https://github.com/huggingface/datasets/pull/2287",
"diff_url": "https://github.com/huggingface/datasets/pull/2287.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2287.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2286/comments | https://api.github.com/repos/huggingface/datasets/issues/2286/events | https://github.com/huggingface/datasets/pull/2286 | 871,032,393 | MDExOlB1bGxSZXF1ZXN0NjI2MTE5MTE2 | 2,286 | Fix metadata validation with config names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,619,703,872,000 | 1,619,705,249,000 | 1,619,705,248,000 | MEMBER | null | I noticed in https://github.com/huggingface/datasets/pull/2280 that the metadata validator doesn't parse the tags in the readme properly when they contain the tags per config. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2286/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2286/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2286",
"html_url": "https://github.com/huggingface/datasets/pull/2286",
"diff_url": "https://github.com/huggingface/datasets/pull/2286.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2286.patch",
"merged_at": 1619705248000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2285/comments | https://api.github.com/repos/huggingface/datasets/issues/2285/events | https://github.com/huggingface/datasets/issues/2285 | 871,005,236 | MDU6SXNzdWU4NzEwMDUyMzY= | 2,285 | Help understanding how to build a dataset for language modeling as with the old TextDataset | {
"login": "danieldiezmallo",
"id": 46021411,
"node_id": "MDQ6VXNlcjQ2MDIxNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/46021411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danieldiezmallo",
"html_url": "https://github.com/danieldiezmallo",
"followers_url": "https://api.github.com/users/danieldiezmallo/followers",
"following_url": "https://api.github.com/users/danieldiezmallo/following{/other_user}",
"gists_url": "https://api.github.com/users/danieldiezmallo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danieldiezmallo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danieldiezmallo/subscriptions",
"organizations_url": "https://api.github.com/users/danieldiezmallo/orgs",
"repos_url": "https://api.github.com/users/danieldiezmallo/repos",
"events_url": "https://api.github.com/users/danieldiezmallo/events{/privacy}",
"received_events_url": "https://api.github.com/users/danieldiezmallo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"\r\nI received an answer for this question on the HuggingFace Datasets forum by @lhoestq\r\n\r\nHi !\r\n\r\nIf you want to tokenize line by line, you can use this:\r\n\r\n```\r\nmax_seq_length = 512\r\nnum_proc = 4\r\n\r\ndef tokenize_function(examples):\r\n# Remove empty lines\r\nexamples[\"text\"] = [line for li... | 1,619,702,205,000 | 1,621,408,965,000 | 1,621,408,959,000 | NONE | null | Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file with a whole document on each line, meaning that each line exceeds the usual 512-token limit of most tokenizers.
I would like to understand the process for building a text dataset that first splits each document into lines of a "tokenizable" size and then tokenizes them, as the old TextDataset class did. With TextDataset you only had to do the following, and a tokenized dataset without text loss was ready to pass to a DataCollator:
```
model_checkpoint = 'distilbert-base-uncased'
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
from transformers import TextDataset
dataset = TextDataset(
tokenizer=tokenizer,
file_path="path/to/text_file.txt",
block_size=512,
)
```
For now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size in the tokenizer:
```
import datasets
from transformers import AutoTokenizer

# The 'text' builder loads the file line by line into a "text" column.
dataset = datasets.load_dataset('text', data_files='path/to/text_file.txt')
model_checkpoint = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
def tokenize_function(examples):
return tokenizer(examples["text"])
tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
tokenized_datasets
```
So what would be the "standard" way of creating a dataset in the way it was done before?
Thank you very much for the help :)) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2285/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2285/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2284 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2284/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2284/comments | https://api.github.com/repos/huggingface/datasets/issues/2284/events | https://github.com/huggingface/datasets/pull/2284 | 870,932,710 | MDExOlB1bGxSZXF1ZXN0NjI2MDM5MDc5 | 2,284 | Initialize Imdb dataset as used in Don't Stop Pretraining Paper | {
"login": "BobbyManion",
"id": 52530809,
"node_id": "MDQ6VXNlcjUyNTMwODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BobbyManion",
"html_url": "https://github.com/BobbyManion",
"followers_url": "https://api.github.com/users/BobbyManion/followers",
"following_url": "https://api.github.com/users/BobbyManion/following{/other_user}",
"gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions",
"organizations_url": "https://api.github.com/users/BobbyManion/orgs",
"repos_url": "https://api.github.com/users/BobbyManion/repos",
"events_url": "https://api.github.com/users/BobbyManion/events{/privacy}",
"received_events_url": "https://api.github.com/users/BobbyManion/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,619,697,158,000 | 1,619,700,874,000 | 1,619,700,874,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2284/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2284/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2284",
"html_url": "https://github.com/huggingface/datasets/pull/2284",
"diff_url": "https://github.com/huggingface/datasets/pull/2284.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2284.patch",
"merged_at": null
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/2283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2283/comments | https://api.github.com/repos/huggingface/datasets/issues/2283/events | https://github.com/huggingface/datasets/pull/2283 | 870,926,475 | MDExOlB1bGxSZXF1ZXN0NjI2MDM0MDk5 | 2,283 | Initialize imdb dataset from don't stop pretraining paper | {
"login": "BobbyManion",
"id": 52530809,
"node_id": "MDQ6VXNlcjUyNTMwODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BobbyManion",
"html_url": "https://github.com/BobbyManion",
"followers_url": "https://api.github.com/users/BobbyManion/followers",
"following_url": "https://api.github.com/users/BobbyManion/following{/other_user}",
"gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions",
"organizations_url": "https://api.github.com/users/BobbyManion/orgs",
"repos_url": "https://api.github.com/users/BobbyManion/repos",
"events_url": "https://api.github.com/users/BobbyManion/events{/privacy}",
"received_events_url": "https://api.github.com/users/BobbyManion/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,619,696,694,000 | 1,619,697,024,000 | 1,619,697,024,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2283/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2283/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2283",
"html_url": "https://github.com/huggingface/datasets/pull/2283",
"diff_url": "https://github.com/huggingface/datasets/pull/2283.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2283.patch",
"merged_at": null
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/2282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2282/comments | https://api.github.com/repos/huggingface/datasets/issues/2282/events | https://github.com/huggingface/datasets/pull/2282 | 870,900,332 | MDExOlB1bGxSZXF1ZXN0NjI2MDEyMzM3 | 2,282 | Initialize imdb dataset from don't stop pretraining paper | {
"login": "BobbyManion",
"id": 52530809,
"node_id": "MDQ6VXNlcjUyNTMwODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/52530809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BobbyManion",
"html_url": "https://github.com/BobbyManion",
"followers_url": "https://api.github.com/users/BobbyManion/followers",
"following_url": "https://api.github.com/users/BobbyManion/following{/other_user}",
"gists_url": "https://api.github.com/users/BobbyManion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BobbyManion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BobbyManion/subscriptions",
"organizations_url": "https://api.github.com/users/BobbyManion/orgs",
"repos_url": "https://api.github.com/users/BobbyManion/repos",
"events_url": "https://api.github.com/users/BobbyManion/events{/privacy}",
"received_events_url": "https://api.github.com/users/BobbyManion/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,619,695,076,000 | 1,619,696,631,000 | 1,619,696,631,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2282/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2282/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2282",
"html_url": "https://github.com/huggingface/datasets/pull/2282",
"diff_url": "https://github.com/huggingface/datasets/pull/2282.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2282.patch",
"merged_at": null
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/2281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2281/comments | https://api.github.com/repos/huggingface/datasets/issues/2281/events | https://github.com/huggingface/datasets/pull/2281 | 870,792,784 | MDExOlB1bGxSZXF1ZXN0NjI1OTI2MjAw | 2,281 | Update multi_woz_v22 checksum | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,619,687,351,000 | 1,619,703,695,000 | 1,619,703,694,000 | MEMBER | null | Fix issue https://github.com/huggingface/datasets/issues/1876
The files were changed in https://github.com/budzianowski/multiwoz/pull/72 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2281/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2281/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2281",
"html_url": "https://github.com/huggingface/datasets/pull/2281",
"diff_url": "https://github.com/huggingface/datasets/pull/2281.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2281.patch",
"merged_at": 1619703694000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2280/comments | https://api.github.com/repos/huggingface/datasets/issues/2280/events | https://github.com/huggingface/datasets/pull/2280 | 870,780,431 | MDExOlB1bGxSZXF1ZXN0NjI1OTE2Mzcy | 2,280 | Fixed typo seperate->separate | {
"login": "laksh9950",
"id": 32505743,
"node_id": "MDQ6VXNlcjMyNTA1NzQz",
"avatar_url": "https://avatars.githubusercontent.com/u/32505743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/laksh9950",
"html_url": "https://github.com/laksh9950",
"followers_url": "https://api.github.com/users/laksh9950/followers",
"following_url": "https://api.github.com/users/laksh9950/following{/other_user}",
"gists_url": "https://api.github.com/users/laksh9950/gists{/gist_id}",
"starred_url": "https://api.github.com/users/laksh9950/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laksh9950/subscriptions",
"organizations_url": "https://api.github.com/users/laksh9950/orgs",
"repos_url": "https://api.github.com/users/laksh9950/repos",
"events_url": "https://api.github.com/users/laksh9950/events{/privacy}",
"received_events_url": "https://api.github.com/users/laksh9950/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for the fix :)\r\nThe CI fail isn't related to your PR. I opened a PR #2286 to fix the CI.\r\nWe'll wait for #2286 to be merged to master first if you don't mind",
"The PR has been merged ! Feel free to merge master into your branch to fix the CI"
] | 1,619,686,546,000 | 1,619,714,482,000 | 1,619,714,476,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2280/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2280/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2280",
"html_url": "https://github.com/huggingface/datasets/pull/2280",
"diff_url": "https://github.com/huggingface/datasets/pull/2280.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2280.patch",
"merged_at": null
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/2279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2279/comments | https://api.github.com/repos/huggingface/datasets/issues/2279/events | https://github.com/huggingface/datasets/issues/2279 | 870,431,662 | MDU6SXNzdWU4NzA0MzE2NjI= | 2,279 | Compatibility with Ubuntu 18 and GLIBC 2.27? | {
"login": "tginart",
"id": 11379648,
"node_id": "MDQ6VXNlcjExMzc5NjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/11379648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tginart",
"html_url": "https://github.com/tginart",
"followers_url": "https://api.github.com/users/tginart/followers",
"following_url": "https://api.github.com/users/tginart/following{/other_user}",
"gists_url": "https://api.github.com/users/tginart/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tginart/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tginart/subscriptions",
"organizations_url": "https://api.github.com/users/tginart/orgs",
"repos_url": "https://api.github.com/users/tginart/repos",
"events_url": "https://api.github.com/users/tginart/events{/privacy}",
"received_events_url": "https://api.github.com/users/tginart/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"From the trace this seems like an error in the tokenizer library instead.\r\n\r\nDo you mind opening an issue at https://github.com/huggingface/tokenizers instead?",
"Hi @tginart, thanks for reporting.\r\n\r\nI think this issue is already open at `tokenizers` library: https://github.com/huggingface/tokenizers/is... | 1,619,647,687,000 | 1,619,682,162,000 | 1,619,682,162,000 | NONE | null | ## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04).
I'm not sure if there is anything that can be done about this, but I'd like to confirm that using huggingface/datasets requires either an upgrade to Ubuntu 19/20 or a hand-rolled install of a higher version of GLIBC.
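For anyone who wants to confirm the installed version first, a minimal sketch that asks glibc directly (the `gnu_get_libc_version` symbol is glibc-specific, so this only works on glibc-based systems):
```python
import ctypes

# Load the running C library and query its version string.
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print(libc.gnu_get_libc_version().decode())  # e.g. "2.27" on Ubuntu 18.04
```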
## Steps to reproduce the bug
1. clone the transformers repo
2. move to examples/pytorch/language-modeling
3. run example command:
```python run_clm.py --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm```
## Expected results
As described in the transformers repo.
## Actual results
```Traceback (most recent call last):
File "run_clm.py", line 34, in <module>
from transformers import (
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2487, in __getattr__
return super().__getattr__(name)
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/file_utils.py", line 1699, in __getattr__
module = self._get_module(self._class_to_module[name])
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/__init__.py", line 2481, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/__init__.py", line 19, in <module>
from . import (
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/__init__.py", line 23, in <module>
from .tokenization_layoutlm import LayoutLMTokenizer
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/layoutlm/tokenization_layoutlm.py", line 19, in <module>
from ..bert.tokenization_bert import BertTokenizer
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/models/bert/tokenization_bert.py", line 23, in <module>
from ...tokenization_utils import PreTrainedTokenizer, _is_control, _is_punctuation, _is_whitespace
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils.py", line 26, in <module>
from .tokenization_utils_base import (
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 68, in <module>
from tokenizers import AddedToken
File "/home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/__init__.py", line 79, in <module>
from .tokenizers import (
ImportError: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.29' not found (required by /home/tginart/anaconda3/envs/huggingface/lib/python3.7/site-packages/tokenizers/tokenizers.cpython-37m-x86_64-linux-gnu.so)
```
## Versions
Paste the output of the following code:
```
- Datasets: 1.6.1
- Python: 3.7.10 (default, Feb 26 2021, 18:47:35)
[GCC 7.3.0]
- Platform: Linux-4.15.0-128-generic-x86_64-with-debian-buster-sid
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2279/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2279/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2278/comments | https://api.github.com/repos/huggingface/datasets/issues/2278/events | https://github.com/huggingface/datasets/issues/2278 | 870,088,059 | MDU6SXNzdWU4NzAwODgwNTk= | 2,278 | Loss result in GPTNeoForCausalLM | {
"login": "Yossillamm",
"id": 51174606,
"node_id": "MDQ6VXNlcjUxMTc0NjA2",
"avatar_url": "https://avatars.githubusercontent.com/u/51174606?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Yossillamm",
"html_url": "https://github.com/Yossillamm",
"followers_url": "https://api.github.com/users/Yossillamm/followers",
"following_url": "https://api.github.com/users/Yossillamm/following{/other_user}",
"gists_url": "https://api.github.com/users/Yossillamm/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Yossillamm/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Yossillamm/subscriptions",
"organizations_url": "https://api.github.com/users/Yossillamm/orgs",
"repos_url": "https://api.github.com/users/Yossillamm/repos",
"events_url": "https://api.github.com/users/Yossillamm/events{/privacy}",
"received_events_url": "https://api.github.com/users/Yossillamm/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi ! I think you might have to ask on the `transformers` repo on or the forum at https://discuss.huggingface.co/\r\n\r\nClosing since it's not related to this library"
] | 1,619,624,392,000 | 1,620,317,663,000 | 1,620,317,663,000 | NONE | null | Is there any way to get the "loss" and "logits" results in the GPT-Neo API? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2278/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2278/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2276 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2276/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2276/comments | https://api.github.com/repos/huggingface/datasets/issues/2276/events | https://github.com/huggingface/datasets/issues/2276 | 870,010,511 | MDU6SXNzdWU4NzAwMTA1MTE= | 2,276 | concatenate_datasets loads all the data into memory | {
"login": "TaskManager91",
"id": 7063207,
"node_id": "MDQ6VXNlcjcwNjMyMDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7063207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TaskManager91",
"html_url": "https://github.com/TaskManager91",
"followers_url": "https://api.github.com/users/TaskManager91/followers",
"following_url": "https://api.github.com/users/TaskManager91/following{/other_user}",
"gists_url": "https://api.github.com/users/TaskManager91/gists{/gist_id}",
"starred_url": "https://api.github.com/users/TaskManager91/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/TaskManager91/subscriptions",
"organizations_url": "https://api.github.com/users/TaskManager91/orgs",
"repos_url": "https://api.github.com/users/TaskManager91/repos",
"events_url": "https://api.github.com/users/TaskManager91/events{/privacy}",
"received_events_url": "https://api.github.com/users/TaskManager91/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Therefore, when I try to concatenate larger datasets (5x 35GB data sets) I also get an out of memory error, since over 90GB of swap space was used at the time of the crash:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nMemoryError Traceba... | 1,619,620,041,000 | 1,620,031,315,000 | 1,620,031,315,000 | NONE | null | ## Describe the bug
When I try to concatenate 2 datasets (10GB each), the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or when concatenating it again.

## Steps to reproduce the bug
```python
from datasets import concatenate_datasets, load_from_disk
test_sampled_pro = load_from_disk("test_sampled_pro")
val_sampled_pro = load_from_disk("val_sampled_pro")
big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro])
# Loaded to memory
big_set.save_to_disk("big_set")
# Loaded to memory
big_set = concatenate_datasets([big_set, val_sampled_pro])
```
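To quantify the reported behavior, the resident memory of the process can be checked around each step. A minimal sketch, assuming `psutil` is installed and reusing the names from the snippet above:
```python
import os

import psutil

process = psutil.Process(os.getpid())

def rss_gb():
    # Resident set size of the current process, in gigabytes.
    return process.memory_info().rss / 1e9

print(f"RSS before concatenate: {rss_gb():.2f} GB")
big_set = concatenate_datasets([test_sampled_pro, val_sampled_pro])
print(f"RSS after concatenate: {rss_gb():.2f} GB")
```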
## Expected results
The data should be loaded into memory in batches and then saved directly to disk.
## Actual results
The entire data set is loaded into the memory and then saved to the hard disk.
## Versions
Paste the output of the following code:
```python
- Datasets: 1.6.1
- Python: 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0]
- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2276/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2276/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2275 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2275/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2275/comments | https://api.github.com/repos/huggingface/datasets/issues/2275/events | https://github.com/huggingface/datasets/issues/2275 | 869,378,311 | MDU6SXNzdWU4NjkzNzgzMTE= | 2,275 | SNLI dataset has labels of -1 | {
"login": "puzzler10",
"id": 17426779,
"node_id": "MDQ6VXNlcjE3NDI2Nzc5",
"avatar_url": "https://avatars.githubusercontent.com/u/17426779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/puzzler10",
"html_url": "https://github.com/puzzler10",
"followers_url": "https://api.github.com/users/puzzler10/followers",
"following_url": "https://api.github.com/users/puzzler10/following{/other_user}",
"gists_url": "https://api.github.com/users/puzzler10/gists{/gist_id}",
"starred_url": "https://api.github.com/users/puzzler10/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/puzzler10/subscriptions",
"organizations_url": "https://api.github.com/users/puzzler10/orgs",
"repos_url": "https://api.github.com/users/puzzler10/repos",
"events_url": "https://api.github.com/users/puzzler10/events{/privacy}",
"received_events_url": "https://api.github.com/users/puzzler10/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @puzzler10, \r\nThose examples where `gold_label` field was empty, -1 label was alloted to it. In order to remove it you can filter the samples from train/val/test splits. Here's how you can drop those rows from the dataset:\r\n`dataset = load_dataset(\"snli\")`\r\n`dataset_test_filter = dataset['test'].filter(... | 1,619,569,945,000 | 1,621,258,458,000 | 1,621,258,458,000 | NONE | null | There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107 or 124 of the test set.
It isn't clear what these labels mean. I found a [line of code](https://github.com/huggingface/datasets/blob/80e59ef178d3bb2090d091bc32315c655eb0633d/datasets/snli/snli.py#L94) that seems to insert them, but it is still unclear why they are there. The current workaround is to simply drop those rows before training a model.
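The workaround is a one-line filter per split; a minimal sketch along the lines of the reply above:
```python
from datasets import load_dataset

snli = load_dataset("snli")
# -1 marks examples whose annotators reached no gold-label consensus; drop them.
snli = snli.filter(lambda example: example["label"] != -1)
```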
Perhaps the documentation should be updated. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2275/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2275/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2274 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2274/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2274/comments | https://api.github.com/repos/huggingface/datasets/issues/2274/events | https://github.com/huggingface/datasets/pull/2274 | 869,186,276 | MDExOlB1bGxSZXF1ZXN0NjI0NTkyMjQx | 2,274 | Always update metadata in arrow schema | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,619,551,317,000 | 1,619,690,271,000 | 1,619,690,270,000 | MEMBER | null | We store a redundant copy of the features in the metadata of the schema of the arrow table. This is used to recover the features when doing `Dataset.from_file`. These metadata are updated after each transform that changes the feature types.
For each function that transforms the feature types of the dataset, I added a step in the tests to make sure the metadata in the arrow schema are up to date.
I also added a line to update the metadata directly in the `Dataset.__init__` method.
This way, even a dataset instantiated with `__init__` will have a table with the right metadata.
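To illustrate what is being kept in sync, a hedged sketch of reading the stored features back out of the schema metadata. It assumes the backing table is reachable via `Dataset.data`; the `huggingface` metadata key is an internal detail and may differ across versions:
```python
import json

from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})
# The features are serialized as JSON under the "huggingface" schema-metadata key.
metadata = json.loads(ds.data.schema.metadata[b"huggingface"].decode())
print(metadata["info"]["features"])
```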
cc @mariosasko | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2274/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2274/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2274",
"html_url": "https://github.com/huggingface/datasets/pull/2274",
"diff_url": "https://github.com/huggingface/datasets/pull/2274.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2274.patch",
"merged_at": 1619690270000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2273/comments | https://api.github.com/repos/huggingface/datasets/issues/2273/events | https://github.com/huggingface/datasets/pull/2273 | 869,046,290 | MDExOlB1bGxSZXF1ZXN0NjI0NDcxODc1 | 2,273 | Added CUAD metrics | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,619,542,152,000 | 1,619,704,787,000 | 1,619,704,787,000 | CONTRIBUTOR | null | `EM`, `F1`, `AUPR`, `Precision@80%Recall`, and `Precision@90%Recall` metrics supported for CUAD | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2273/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2273/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2273",
"html_url": "https://github.com/huggingface/datasets/pull/2273",
"diff_url": "https://github.com/huggingface/datasets/pull/2273.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2273.patch",
"merged_at": 1619704787000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2272 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2272/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2272/comments | https://api.github.com/repos/huggingface/datasets/issues/2272/events | https://github.com/huggingface/datasets/issues/2272 | 869,017,977 | MDU6SXNzdWU4NjkwMTc5Nzc= | 2,272 | Bug in Dataset.class_encode_column | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"This has been fixed in this commit: https://github.com/huggingface/datasets/pull/2254/commits/88676c930216cd4cc31741b99827b477d2b46cb6\r\n\r\nIt was introduced in #2246 : using map with `input_columns` doesn't return the other columns anymore"
] | 1,619,539,998,000 | 1,619,787,267,000 | 1,619,787,267,000 | MEMBER | null | ## Describe the bug
All the columns except the one passed to `Dataset.class_encode_column` are discarded.
## Expected results
All the original columns should be kept.
This needs regression tests.
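A minimal reproduction sketch that such a regression test could build on, with made-up column names:
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar"], "label": ["pos", "neg"]})
ds = ds.class_encode_column("label")
# Buggy behavior keeps only "label"; the expected result keeps both columns.
assert ds.column_names == ["text", "label"]
```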
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2272/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2272/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2270/comments | https://api.github.com/repos/huggingface/datasets/issues/2270/events | https://github.com/huggingface/datasets/pull/2270 | 868,913,660 | MDExOlB1bGxSZXF1ZXN0NjI0MzU5Njky | 2,270 | Fix iterable interface expected by numpy | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It's been fixed in this commit: https://github.com/huggingface/datasets/commit/549110e08238b3716a5904667095fb003acda54e\r\n\r\nBasically #2246 broke querying an index with a simple iterable.\r\nWith the fix, it's again possible to use iterables and we can keep RandIter as it is.\r\n\r\nClosing since the fix is alr... | 1,619,534,156,000 | 1,619,631,567,000 | 1,619,631,567,000 | MEMBER | null | Numpy expects the old iterable interface with `__getitem__` instead of `__iter__`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2270/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2270/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2270",
"html_url": "https://github.com/huggingface/datasets/pull/2270",
"diff_url": "https://github.com/huggingface/datasets/pull/2270.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2270.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2269/comments | https://api.github.com/repos/huggingface/datasets/issues/2269/events | https://github.com/huggingface/datasets/pull/2269 | 868,878,468 | MDExOlB1bGxSZXF1ZXN0NjI0MzMwNDA3 | 2,269 | Fix query table with iterable | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,619,531,978,000 | 1,619,533,317,000 | 1,619,533,316,000 | MEMBER | null | The benchmark runs are failing on master because it tries to use an iterable to query the dataset.
However there's currently an issue caused by the use of `np.array` instead of `np.fromiter` on the iterable.
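For illustration, here is a minimal sketch of the difference (not the actual patch; the index dtype is an assumption):
```python
import numpy as np

indices = (i for i in range(5))  # a plain iterable of row indices

# np.array does not consume a generic iterable like a generator;
# it wraps it in a 0-d object array:
# np.array(indices) -> array(<generator object ...>, dtype=object)

# np.fromiter consumes the iterable into a proper 1-d integer array:
arr = np.fromiter(indices, dtype=np.int64)  # array([0, 1, 2, 3, 4])
```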
This PR fixes it. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2269/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2269/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2269",
"html_url": "https://github.com/huggingface/datasets/pull/2269",
"diff_url": "https://github.com/huggingface/datasets/pull/2269.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2269.patch",
"merged_at": 1619533316000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2268/comments | https://api.github.com/repos/huggingface/datasets/issues/2268/events | https://github.com/huggingface/datasets/pull/2268 | 868,773,380 | MDExOlB1bGxSZXF1ZXN0NjI0MjQyODg1 | 2,268 | Don't use pyarrow 4.0.0 since it segfaults when casting a sliced ListArray of integers | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq note that the segfault also occurs on Linux.",
"Created the ticket at\r\nhttps://issues.apache.org/jira/browse/ARROW-12568",
"@lhoestq the ticket you mentioned is now in state resolved. Pyarrow supports AArch64 after version 4.0.0. Because of this restriction `datasets` is not installing in AArch64 sy... | 1,619,524,708,000 | 1,623,501,889,000 | 1,619,531,000,000 | MEMBER | null | This test `tests/test_table.py::test_concatenation_table_cast` segfaults with the latest update of pyarrow 4.0.0.
Setting `pyarrow<4.0.0` for now. I'll open an issue on JIRA once I know more about the origin of the issue. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2268/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2268/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2268",
"html_url": "https://github.com/huggingface/datasets/pull/2268",
"diff_url": "https://github.com/huggingface/datasets/pull/2268.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2268.patch",
"merged_at": 1619531000000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2266 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2266/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2266/comments | https://api.github.com/repos/huggingface/datasets/issues/2266/events | https://github.com/huggingface/datasets/pull/2266 | 867,864,353 | MDExOlB1bGxSZXF1ZXN0NjIzNDY1OTI5 | 2,266 | Make tests run faster | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"LOL, I was also working on something similar 😅. I'm gonna have a look!!!",
"Sorry I didn't know you were also working on it ^^'\r\nAnd yes I 100% agree with you on the points you mentioned. We should definitely improve the coverage. It would be nice to have a clearer separation to know which tests in the suite ... | 1,619,452,540,000 | 1,619,690,413,000 | 1,619,690,404,000 | MEMBER | null | From 7min to 2min to run pytest.
Ideally we should keep the whole CI run time below 10min.
In this PR I removed the remote tests that were never used.
I also replaced nested parametrized tests with unit tests.
This makes me think that we could still add more high-level tests to check a few combinations of parameters (but not all of them, since there are too many); a sketch of what this could look like is shown below.
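A hedged sketch of such a test (the parameter names and combinations are made up for illustration):
```python
import pytest

@pytest.mark.parametrize(
    "in_memory, batched",  # a handful of hand-picked combinations, not the full grid
    [(True, False), (False, True), (True, True)],
)
def test_map_combinations(in_memory, batched):
    # placeholder assertion; a real test would run Dataset.map with these options
    assert isinstance(in_memory, bool) and isinstance(batched, bool)
```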
Let me know what you think.
Finally, in another PR, we can also separate the CI into two CircleCI jobs:
- the tests of the core code of the lib
- the tests of all the dataset/metric scripts. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2266/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2266/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2266",
"html_url": "https://github.com/huggingface/datasets/pull/2266",
"diff_url": "https://github.com/huggingface/datasets/pull/2266.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2266.patch",
"merged_at": 1619690404000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2265 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2265/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2265/comments | https://api.github.com/repos/huggingface/datasets/issues/2265/events | https://github.com/huggingface/datasets/pull/2265 | 867,490,646 | MDExOlB1bGxSZXF1ZXN0NjIzMTUyOTg5 | 2,265 | Update black | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,619,429,709,000 | 1,619,430,468,000 | 1,619,430,467,000 | MEMBER | null | Latest black version 21.4b0 requires to reformat most dataset scripts and also the core code of the lib.
This currently makes the CI fail on master. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2265/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2265/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2265",
"html_url": "https://github.com/huggingface/datasets/pull/2265",
"diff_url": "https://github.com/huggingface/datasets/pull/2265.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2265.patch",
"merged_at": 1619430467000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2264 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2264/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2264/comments | https://api.github.com/repos/huggingface/datasets/issues/2264/events | https://github.com/huggingface/datasets/pull/2264 | 867,476,228 | MDExOlB1bGxSZXF1ZXN0NjIzMTQwODA1 | 2,264 | Fix memory issue in multiprocessing: Don't pickle table index | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The code quality check is going to be fixed by #2265 ",
"The memory issue didn't come from `self.__dict__.copy()` but from the fact that this dict contains `_batches` which has all the batches of the table in it.\r\nTherefore for a MemoryMappedTable all the data in `_batches` were copied in memory when pickling ... | 1,619,428,895,000 | 1,619,433,028,000 | 1,619,431,694,000 | MEMBER | null | The table index is currently being pickled when doing multiprocessing, which brings all the record batches of the dataset in memory.
I fixed that by not pickling the index attributes. Therefore, each process has to rebuild the index when unpickling the table.
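A rough sketch of the idea (the class and attribute names follow the discussion above, but the actual patch may differ in detail):
```python
import pickle

class MemoryMappedTable:
    def __init__(self, path):
        self.path = path
        self._batches = self._load_batches()  # heavy: the record batches themselves

    def _load_batches(self):
        return [f"batch-from-{self.path}"]  # stand-in for memory-mapped record batches

    def __getstate__(self):
        state = self.__dict__.copy()
        state.pop("_batches", None)  # don't ship the data itself through pickle
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self._batches = self._load_batches()  # rebuild from the file on disk instead

table = pickle.loads(pickle.dumps(MemoryMappedTable("data.arrow")))
assert table._batches  # rebuilt after unpickling, not copied through memory
```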
Fix issue #2256
We'll do a patch release ASAP! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2264/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2264/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2264",
"html_url": "https://github.com/huggingface/datasets/pull/2264",
"diff_url": "https://github.com/huggingface/datasets/pull/2264.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2264.patch",
"merged_at": 1619431694000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2263/comments | https://api.github.com/repos/huggingface/datasets/issues/2263/events | https://github.com/huggingface/datasets/pull/2263 | 867,420,912 | MDExOlB1bGxSZXF1ZXN0NjIzMDk0NTcy | 2,263 | test data added, dataset_infos updated | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,619,425,638,000 | 1,619,688,621,000 | 1,619,688,620,000 | CONTRIBUTOR | null | Fixes #2262. Thanks for pointing out issue with dataset @jinmang2! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2263/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2263/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2263",
"html_url": "https://github.com/huggingface/datasets/pull/2263",
"diff_url": "https://github.com/huggingface/datasets/pull/2263.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2263.patch",
"merged_at": 1619688620000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2262/comments | https://api.github.com/repos/huggingface/datasets/issues/2262/events | https://github.com/huggingface/datasets/issues/2262 | 867,325,351 | MDU6SXNzdWU4NjczMjUzNTE= | 2,262 | NewsPH NLI dataset script fails to access test data. | {
"login": "jinmang2",
"id": 37775784,
"node_id": "MDQ6VXNlcjM3Nzc1Nzg0",
"avatar_url": "https://avatars.githubusercontent.com/u/37775784?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinmang2",
"html_url": "https://github.com/jinmang2",
"followers_url": "https://api.github.com/users/jinmang2/followers",
"following_url": "https://api.github.com/users/jinmang2/following{/other_user}",
"gists_url": "https://api.github.com/users/jinmang2/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jinmang2/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jinmang2/subscriptions",
"organizations_url": "https://api.github.com/users/jinmang2/orgs",
"repos_url": "https://api.github.com/users/jinmang2/repos",
"events_url": "https://api.github.com/users/jinmang2/events{/privacy}",
"received_events_url": "https://api.github.com/users/jinmang2/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Thanks @bhavitvyamalik for the fix !\r\nThe fix will be available in the next release.\r\nIt's already available on the `master` branch. For now you can either install `datasets` from source or use `script_version=\"master\"` in `load_dataset` to use the fixed version of this dataset."
] | 1,619,419,481,000 | 1,619,688,723,000 | 1,619,688,620,000 | NONE | null | In Newsph-NLI Dataset (#1192), it fails to access test data.
According to the script below, the download manager will download the train data when trying to download the test data.
https://github.com/huggingface/datasets/blob/2a2dd6316af2cc7fdf24e4779312e8ee0c7ed98b/datasets/newsph_nli/newsph_nli.py#L71
If you download it according to the script above, you can see that train and test receive the same data as shown below.
```python
>>> from datasets import load_dataset
>>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py")
>>> newsph_nli
DatasetDict({
train: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
test: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
validation: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 90000
})
})
>>> newsph_nli["train"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
>>> newsph_nli["test"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
```
Locally, I modified the source code as below and got the correct result.
```python
test_path = os.path.join(download_path, "test.csv")  # line 71 of newsph_nli.py
```
```python
>>> from datasets import load_dataset
>>> newsph_nli = load_dataset(path="./datasets/newsph_nli.py")
>>> newsph_nli
DatasetDict({
train: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 420000
})
test: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 9000
})
validation: Dataset({
features: ['premise', 'hypothesis', 'label'],
num_rows: 90000
})
})
>>> newsph_nli["train"][0]
{'hypothesis': 'Ito ang dineklara ni Atty. Romulo Macalintal, abogado ni Robredo, kaugnay ng pagsisimula ng preliminary conference ngayong hapon sa Presidential Electoral Tribunal (PET).',
'label': 1,
'premise': '"Hindi ko ugali ang mamulitika; mas gusto kong tahimik na magtrabaho. Pero sasabihin ko ito ngayon: ang tapang, lakas, at diskarte, hindi nadadaan sa mapanirang salita. Ang kailangan ng taumbayan ay tapang sa gawa," ayon kay Robredo sa inilabas nitong statement.'}
>>> newsph_nli["test"][0]
{'hypothesis': '-- JAI (@JaiPaller) September 13, 2019',
'label': 1,
'premise': 'Pinag-iingat ng Konsulado ng Pilipinas sa Dubai ang publiko, partikular ang mga donor, laban sa mga scam na gumagamit ng mga charitable organization.'}
```
I don't have experience with open-source pull requests, so I suggest that you apply this change to the source.
Thank you for reading :) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2262/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2262/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2261/comments | https://api.github.com/repos/huggingface/datasets/issues/2261/events | https://github.com/huggingface/datasets/pull/2261 | 867,088,818 | MDExOlB1bGxSZXF1ZXN0NjIyODIxNzQw | 2,261 | Improve ReadInstruction logic and update docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Ready for the final review"
] | 1,619,377,646,000 | 1,621,275,884,000 | 1,621,270,137,000 | CONTRIBUTOR | null | Improve ReadInstruction logic and docs. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2261/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2261/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2261",
"html_url": "https://github.com/huggingface/datasets/pull/2261",
"diff_url": "https://github.com/huggingface/datasets/pull/2261.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2261.patch",
"merged_at": 1621270137000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2260/comments | https://api.github.com/repos/huggingface/datasets/issues/2260/events | https://github.com/huggingface/datasets/pull/2260 | 866,961,697 | MDExOlB1bGxSZXF1ZXN0NjIyNzMwODYx | 2,260 | GooAQ dataset added | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for adding this one !\r\nThe download manager does support downloading files on git lfs via their github url. No need for a manual download option ;)"
] | 1,619,342,808,000 | 1,620,376,577,000 | 1,620,376,577,000 | CONTRIBUTOR | null | @lhoestq here the dataset is stored with Git LFS. Should I add option for manual downloading of dataset using `git lfs pull` post repo cloning or can we accommodate this in the current `download_and_extract`? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2260/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2260/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2260",
"html_url": "https://github.com/huggingface/datasets/pull/2260",
"diff_url": "https://github.com/huggingface/datasets/pull/2260.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2260.patch",
"merged_at": 1620376577000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2259/comments | https://api.github.com/repos/huggingface/datasets/issues/2259/events | https://github.com/huggingface/datasets/pull/2259 | 866,880,092 | MDExOlB1bGxSZXF1ZXN0NjIyNjc2ODA0 | 2,259 | Add support for Split.ALL | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Honestly, I think we should fix some other issues in Split API before this change. E. g. currently the following will not work, even though it should:\r\n```python\r\nimport datasets\r\ndatasets.load_dataset(\"sst\", split=datasets.Split.TRAIN+datasets.Split.TEST) # AssertionError\r\n```\r\n\r\nEDIT:\r\nActually,... | 1,619,315,142,000 | 1,624,868,487,000 | 1,624,868,487,000 | CONTRIBUTOR | null | The title says it all. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2259/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2259/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2259",
"html_url": "https://github.com/huggingface/datasets/pull/2259",
"diff_url": "https://github.com/huggingface/datasets/pull/2259.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2259.patch",
"merged_at": 1624868487000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2258/comments | https://api.github.com/repos/huggingface/datasets/issues/2258/events | https://github.com/huggingface/datasets/pull/2258 | 866,870,588 | MDExOlB1bGxSZXF1ZXN0NjIyNjcxNTQy | 2,258 | Fix incorrect update_metadata_with_features calls in ArrowDataset | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Maybe a test that runs the functions that call `update_metadata_with_features` and checks if metadata was updated would be nice to prevent this from happening in the future."
] | 1,619,311,718,000 | 1,619,457,390,000 | 1,619,456,044,000 | CONTRIBUTOR | null | Fixes bugs in the `update_metadata_with_features` calls (caused by changes in #2151) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2258/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2258/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2258",
"html_url": "https://github.com/huggingface/datasets/pull/2258",
"diff_url": "https://github.com/huggingface/datasets/pull/2258.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2258.patch",
"merged_at": 1619456044000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2257 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2257/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2257/comments | https://api.github.com/repos/huggingface/datasets/issues/2257/events | https://github.com/huggingface/datasets/pull/2257 | 866,755,203 | MDExOlB1bGxSZXF1ZXN0NjIyNTkwMDQw | 2,257 | added metrics for CUAD | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here\r\n\r\n@bhavitvyamalik I guess the mentioned metrics are enough but it would be better if ... | 1,619,273,394,000 | 1,619,690,018,000 | 1,619,540,192,000 | CONTRIBUTOR | null | For now I've added F1, AUPR, Precision at 80% recall, and Precision at 90%. Last 3 metrics were reported in the [paper](https://arxiv.org/pdf/2103.06268.pdf). Please let me know if we require `exact_match` metric too here | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2257/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2257/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2257",
"html_url": "https://github.com/huggingface/datasets/pull/2257",
"diff_url": "https://github.com/huggingface/datasets/pull/2257.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2257.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2256 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2256/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2256/comments | https://api.github.com/repos/huggingface/datasets/issues/2256/events | https://github.com/huggingface/datasets/issues/2256 | 866,708,609 | MDU6SXNzdWU4NjY3MDg2MDk= | 2,256 | Running `dataset.map` with `num_proc > 1` uses a lot of memory | {
"login": "roskoN",
"id": 8143425,
"node_id": "MDQ6VXNlcjgxNDM0MjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/8143425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/roskoN",
"html_url": "https://github.com/roskoN",
"followers_url": "https://api.github.com/users/roskoN/followers",
"following_url": "https://api.github.com/users/roskoN/following{/other_user}",
"gists_url": "https://api.github.com/users/roskoN/gists{/gist_id}",
"starred_url": "https://api.github.com/users/roskoN/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/roskoN/subscriptions",
"organizations_url": "https://api.github.com/users/roskoN/orgs",
"repos_url": "https://api.github.com/users/roskoN/repos",
"events_url": "https://api.github.com/users/roskoN/events{/privacy}",
"received_events_url": "https://api.github.com/users/roskoN/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Thanks for reporting ! We are working on this and we'll do a patch release very soon.",
"We did a patch release to fix this issue.\r\nIt should be fixed in the new version 1.6.1\r\n\r\nThanks again for reporting and for the details :)"
] | 1,619,258,180,000 | 1,619,457,135,000 | 1,619,457,135,000 | NONE | null | ## Describe the bug
Running `dataset.map` with `num_proc > 1` leads to tremendous memory usage that requires swapping to disk, and processing becomes very slow.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dstc8_dataset = load_dataset("roskoN/dstc8-reddit-corpus", keep_in_memory=False)
def _prepare_sample(batch):
return {"input_ids": list(), "attention_mask": list()}
for split_name, dataset_split in list(dstc8_dataset.items()):
print(f"Processing {split_name}")
encoded_dataset_split = dataset_split.map(
function=_prepare_sample,
batched=True,
num_proc=4,
remove_columns=dataset_split.column_names,
batch_size=10,
writer_batch_size=10,
keep_in_memory=False,
)
print(encoded_dataset_split)
path = f"./data/encoded_{split_name}"
encoded_dataset_split.save_to_disk(path)
```
## Expected results
Memory usage should stay within reasonable boundaries.
## Actual results
This is the htop output from running the provided script.

## Versions
```
- Datasets: 1.6.0
- Python: 3.8.8 (default, Apr 13 2021, 19:58:26)
[GCC 7.3.0]
- Platform: Linux-4.19.128-microsoft-standard-x86_64-with-glibc2.10
```
Running on WSL2
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2256/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2256/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2255 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2255/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2255/comments | https://api.github.com/repos/huggingface/datasets/issues/2255/events | https://github.com/huggingface/datasets/pull/2255 | 866,242,892 | MDExOlB1bGxSZXF1ZXN0NjIyMTc0Njg4 | 2,255 | Task casting for text classification & question answering | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"cc @abhi1thakur ",
"Looks really nice so far, thanks !\r\nMaybe if a dataset doesn't have a template for a specific task we could try the default template of this task ?",
"hey @SBrandeis @lhoestq,\r\n\r\ni now have a better idea about what you guys are trying to achieve with the task templates and have a few ... | 1,619,193,641,000 | 1,621,344,696,000 | 1,621,344,695,000 | CONTRIBUTOR | null | This PR implements task preparation for a given task, in the continuation of #2143
Task taxonomy follows 🤗 Transformers' pipelines taxonomy: https://github.com/huggingface/transformers/tree/master/src/transformers/pipelines
Edit by @lewtun:
This PR implements support for the following tasks:
* `text-classification`
* `question-answering`
The intended usage is as follows:
```python
# Load a dataset with default column names / features
ds = load_dataset("dataset_name")
# Cast column names / features to schema. Casting is defined in the dataset's `DatasetInfo`
ds = ds.prepare_for_task(task="text-classification")
# Casting can also be realised during load
ds = load_dataset("dataset_name", task="text-classification")
# We can also combine shared tasks across dataset concatenation
ds1 = load_dataset("dataset_name_1", task="text-classification")
ds2 = load_dataset("dataset_name_2", task="text-classification")
# If the tasks have the same schema, so will `ds_concat`
ds_concat = concatenate_datasets([ds1, ds2])
```
Note that the current implementation assumes that `DatasetInfo.task_templates` has been pre-defined by the user / contributor when overriding the `MyDataset(GeneratorBasedBuilder)._info` function.
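As a hedged illustration, pre-defining a template in `_info` could look roughly like this (the `datasets.tasks` import path and the `TextClassification` signature are assumptions, not taken from this PR's diff):
```python
import datasets
from datasets.tasks import TextClassification  # assumed import path

class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "sentence": datasets.Value("string"),
                    "label": datasets.ClassLabel(names=["neg", "pos"]),
                }
            ),
            task_templates=[
                TextClassification(text_column="sentence", label_column="label")
            ],
        )
```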
As pointed out by @SBrandeis, for evaluation we'll need a way to detect which datasets already have a compatible schema so we don't have to edit hundreds of dataset scripts. One possibility is to check if the schema features are a subset of the dataset ones, e.g.
```python
squad = load_dataset("./datasets/squad", split="train")
qa = QuestionAnswering()
schema = Features({**qa.input_schema, **qa.label_schema})
assert all(item in squad.features.items() for item in schema.items())
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2255/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2255/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2255",
"html_url": "https://github.com/huggingface/datasets/pull/2255",
"diff_url": "https://github.com/huggingface/datasets/pull/2255.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2255.patch",
"merged_at": 1621344695000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2254 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2254/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2254/comments | https://api.github.com/repos/huggingface/datasets/issues/2254/events | https://github.com/huggingface/datasets/pull/2254 | 866,169,312 | MDExOlB1bGxSZXF1ZXN0NjIyMTE1NDI0 | 2,254 | Update format, fingerprint and indices after add_item | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I renamed the variable, added a test for dataset._indices and fixed an issue with class_encode_column"
] | 1,619,188,309,000 | 1,619,541,049,000 | 1,619,541,048,000 | MEMBER | null | Added fingerprint and format update wrappers + update the indices by adding the index of the newly added item in the table. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2254/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2254/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2254",
"html_url": "https://github.com/huggingface/datasets/pull/2254",
"diff_url": "https://github.com/huggingface/datasets/pull/2254.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2254.patch",
"merged_at": 1619541048000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2253/comments | https://api.github.com/repos/huggingface/datasets/issues/2253/events | https://github.com/huggingface/datasets/pull/2253 | 866,034,321 | MDExOlB1bGxSZXF1ZXN0NjIyMDA2Njg3 | 2,253 | Perform minor refactoring: use config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2851292821,
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring",
"name": "refactoring",
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior"
}
] | closed | false | null | [] | null | [
"@lhoestq is there a problem in the master branch? I got a segmentation fault...\r\n```\r\ntests/test_table.py::test_concatenation_table_cast[in_memory] Fatal Python error: Segmentation fault\r\n```",
"Oh wow. Let me re-run the CI just to make sure",
"Hmm interesting, the segfault is still there. I'm investigat... | 1,619,178,347,000 | 1,622,106,765,000 | 1,619,535,779,000 | MEMBER | null | Perform minor refactoring related to `config`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2253/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2253/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2253",
"html_url": "https://github.com/huggingface/datasets/pull/2253",
"diff_url": "https://github.com/huggingface/datasets/pull/2253.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2253.patch",
"merged_at": 1619535778000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2248 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2248/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2248/comments | https://api.github.com/repos/huggingface/datasets/issues/2248/events | https://github.com/huggingface/datasets/pull/2248 | 864,853,447 | MDExOlB1bGxSZXF1ZXN0NjIxMDEyNzg5 | 2,248 | Implement Dataset to JSON | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/3",
"html_url": "https://github.com/huggingface/datasets/milestone/3",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels",
"id": 6644287,
"node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==",
"number": 3,
"title": "1.7",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 3,
"state": "closed",
"created_at": 1617974191000,
"updated_at": 1622478053000,
"due_on": 1620975600000,
"closed_at": 1622478053000
} | [] | 1,619,092,011,000 | 1,619,537,361,000 | 1,619,537,360,000 | MEMBER | null | Implement `Dataset.to_json`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2248/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2248/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2248",
"html_url": "https://github.com/huggingface/datasets/pull/2248",
"diff_url": "https://github.com/huggingface/datasets/pull/2248.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2248.patch",
"merged_at": 1619537360000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2247/comments | https://api.github.com/repos/huggingface/datasets/issues/2247/events | https://github.com/huggingface/datasets/pull/2247 | 864,817,520 | MDExOlB1bGxSZXF1ZXN0NjIwOTgzNzY3 | 2,247 | Implement Dataset from Parquet | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/7",
"html_url": "https://github.com/huggingface/datasets/milestone/7",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/7/labels",
"id": 6931350,
"node_id": "MDk6TWlsZXN0b25lNjkzMTM1MA==",
"number": 7,
"title": "1.11",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 2,
"state": "closed",
"created_at": 1625809740000,
"updated_at": 1630560843000,
"due_on": 1627628400000,
"closed_at": 1630560843000
} | [
"Hi @albertvillanova , I'll implement the parquet builder as an ArrowBasedBuilder if you don't mind",
"closing in favor of #2537 that is already merged"
] | 1,619,089,298,000 | 1,627,306,132,000 | 1,627,306,131,000 | MEMBER | null | Implement instantiation of Dataset from Parquet file. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2247/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2247/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2247",
"html_url": "https://github.com/huggingface/datasets/pull/2247",
"diff_url": "https://github.com/huggingface/datasets/pull/2247.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2247.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2246/comments | https://api.github.com/repos/huggingface/datasets/issues/2246/events | https://github.com/huggingface/datasets/pull/2246 | 864,220,031 | MDExOlB1bGxSZXF1ZXN0NjIwNDg3OTUw | 2,246 | Faster map w/ input_columns & faster slicing w/ Iterable keys | {
"login": "norabelrose",
"id": 39116809,
"node_id": "MDQ6VXNlcjM5MTE2ODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/norabelrose",
"html_url": "https://github.com/norabelrose",
"followers_url": "https://api.github.com/users/norabelrose/followers",
"following_url": "https://api.github.com/users/norabelrose/following{/other_user}",
"gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}",
"starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions",
"organizations_url": "https://api.github.com/users/norabelrose/orgs",
"repos_url": "https://api.github.com/users/norabelrose/repos",
"events_url": "https://api.github.com/users/norabelrose/events{/privacy}",
"received_events_url": "https://api.github.com/users/norabelrose/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq Just fixed the code style issues— I think it should be good to merge now :)"
] | 1,619,034,547,000 | 1,619,453,639,000 | 1,619,453,639,000 | CONTRIBUTOR | null | @lhoestq Fixes #2193
- `map` now uses `with_format` to only load needed columns in memory when `input_columns` is set
- Slicing datasets with Iterables of indices now uses a new `Table.fast_gather` method, implemented with `np.searchsorted`, to find the appropriate batch indices all at once. `pa.concat_tables` is no longer used for this; we just call `pa.Table.from_batches` with a list of all the batch slices.
Together these changes have sped up batched `map()` calls over subsets of columns quite considerably in my initial testing.
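A toy sketch of the `np.searchsorted` trick described above (illustrative numbers, not the PR's actual code):
```python
# Given cumulative batch lengths, np.searchsorted locates, in one call,
# the record batch each requested row index falls into.
import numpy as np

batch_lengths = np.array([4, 4, 3])       # rows per record batch (made up)
ends = np.cumsum(batch_lengths)           # [4, 8, 11]: end offset per batch
indices = np.array([0, 5, 10])            # sorted row indices to gather

batch_ids = np.searchsorted(ends, indices, side="right")  # [0, 1, 2]
starts = np.concatenate(([0], ends[:-1]))                 # [0, 4, 8]
rows_in_batch = indices - starts[batch_ids]               # [0, 1, 2]
print(list(zip(batch_ids.tolist(), rows_in_batch.tolist())))
``` | {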
"url": "https://api.github.com/repos/huggingface/datasets/issues/2246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2246/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2246",
"html_url": "https://github.com/huggingface/datasets/pull/2246",
"diff_url": "https://github.com/huggingface/datasets/pull/2246.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2246.patch",
"merged_at": 1619453638000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2245/comments | https://api.github.com/repos/huggingface/datasets/issues/2245/events | https://github.com/huggingface/datasets/pull/2245 | 863,191,655 | MDExOlB1bGxSZXF1ZXN0NjE5NjQzMjQ3 | 2,245 | Add `key` type and duplicates verification with hashing | {
"login": "NikhilBartwal",
"id": 42388668,
"node_id": "MDQ6VXNlcjQyMzg4NjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikhilBartwal",
"html_url": "https://github.com/NikhilBartwal",
"followers_url": "https://api.github.com/users/NikhilBartwal/followers",
"following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}",
"gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions",
"organizations_url": "https://api.github.com/users/NikhilBartwal/orgs",
"repos_url": "https://api.github.com/users/NikhilBartwal/repos",
"events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/NikhilBartwal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"@lhoestq The tests for key type and duplicate keys have been added and verified successfully.\r\nAfter generating with an intentionally faulty `mnist` script, when there is an incompatible key type, it shows:\r\n\r\n```\r\nDownloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, po... | 1,618,948,999,000 | 1,620,669,877,000 | 1,620,667,882,000 | CONTRIBUTOR | null | Closes #2230
There is currently no verification for the data type and the uniqueness of the keys yielded by the `dataset_builder`.
This PR is currently a work in progress with the following goals:
- [x] Adding `hash_salt` to `ArrowWriter` so that the keys belonging to different splits have different hashes
- [x] Add `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of a certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5` (a rough sketch follows the note below)
- [x] Creating a function giving a custom error message when non-unique keys are found
**[This will take care of type-checking for keys]**
- [x] Checking for duplicate keys in `writer.write()` for each batch
[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in the future for `ArrowBasedBuilder`]
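A rough sketch of the salted-hash idea described above (helper names and details are assumptions, not the PR's actual code):
```python
# Sketch: salt the key per split, hash it with md5, and flag bad key
# types as well as duplicates.
from hashlib import md5

def hash_key(hash_salt: str, key) -> int:
    if isinstance(key, int):
        key = str(key)
    if not isinstance(key, str):
        raise TypeError(f"Key must be str or int, got {type(key)}")
    digest = md5((hash_salt + key).encode("utf-8")).digest()
    return int.from_bytes(digest, "big")  # 16 bytes -> 128-bit integer

seen = set()
for key in ["0_0", "0_1", "0_0"]:  # "0_0" repeats on purpose
    h = hash_key("train", key)
    print("duplicate key!" if h in seen else "ok:", key)
    seen.add(h)
```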
@lhoestq Thank you for the feedback. It would be great to have your guidance on this! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2245/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2245/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2245",
"html_url": "https://github.com/huggingface/datasets/pull/2245",
"diff_url": "https://github.com/huggingface/datasets/pull/2245.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2245.patch",
"merged_at": 1620667881000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2243/comments | https://api.github.com/repos/huggingface/datasets/issues/2243/events | https://github.com/huggingface/datasets/issues/2243 | 862,909,389 | MDU6SXNzdWU4NjI5MDkzODk= | 2,243 | Map is slow and processes batches one after another | {
"login": "villmow",
"id": 2743060,
"node_id": "MDQ6VXNlcjI3NDMwNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/villmow",
"html_url": "https://github.com/villmow",
"followers_url": "https://api.github.com/users/villmow/followers",
"following_url": "https://api.github.com/users/villmow/following{/other_user}",
"gists_url": "https://api.github.com/users/villmow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/villmow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/villmow/subscriptions",
"organizations_url": "https://api.github.com/users/villmow/orgs",
"repos_url": "https://api.github.com/users/villmow/repos",
"events_url": "https://api.github.com/users/villmow/events{/privacy}",
"received_events_url": "https://api.github.com/users/villmow/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @villmow, thanks for reporting.\r\n\r\nCould you please try with the Datasets version 1.6? We released it yesterday and it fixes some issues about the processing speed. You can see the fix implemented by @lhoestq here: #2122.\r\n\r\nOnce you update Datasets, please confirm if the problem persists.",
"Hi @albe... | 1,618,930,700,000 | 1,620,064,473,000 | 1,620,064,472,000 | NONE | null | ## Describe the bug
This bug is somewhat unclear to me, as I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2,000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't give exact steps to reproduce, I'm sorry.
I process a large dataset in a two-step process. I first call `map` on a dataset I load from disk and create a new dataset from it. This works as expected, and `map` uses all the workers I started it with. Then I process the dataset created by the first step, again with `map`, which is really slow and starts only one or two processes at a time. The number of processes is the same for both steps.
pseudo code:
```python
ds = datasets.load_from_disk("path")
new_dataset = ds.map(work, batched=True, ...) # fast uses all processes
final_dataset = new_dataset.map(work2, batched=True, ...) # slow starts one process after another
```
## Expected results
Second stage should be as fast as the first stage.
## Versions
Paste the output of the following code:
- Datasets: 1.5.0
- Python: 3.8.8 (default, Feb 24 2021, 21:46:12)
- Platform: Linux-5.4.0-60-generic-x86_64-with-glibc2.10
Do you guys have any idea? Thanks a lot! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2243/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2242/comments | https://api.github.com/repos/huggingface/datasets/issues/2242/events | https://github.com/huggingface/datasets/issues/2242 | 862,870,205 | MDU6SXNzdWU4NjI4NzAyMDU= | 2,242 | Link to datasets viewer on Quick Tour page returns "502 Bad Gateway" | {
"login": "martavillegas",
"id": 6735707,
"node_id": "MDQ6VXNlcjY3MzU3MDc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6735707?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/martavillegas",
"html_url": "https://github.com/martavillegas",
"followers_url": "https://api.github.com/users/martavillegas/followers",
"following_url": "https://api.github.com/users/martavillegas/following{/other_user}",
"gists_url": "https://api.github.com/users/martavillegas/gists{/gist_id}",
"starred_url": "https://api.github.com/users/martavillegas/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/martavillegas/subscriptions",
"organizations_url": "https://api.github.com/users/martavillegas/orgs",
"repos_url": "https://api.github.com/users/martavillegas/repos",
"events_url": "https://api.github.com/users/martavillegas/events{/privacy}",
"received_events_url": "https://api.github.com/users/martavillegas/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"This should be fixed now!\r\n\r\ncc @srush "
] | 1,618,928,391,000 | 1,618,930,965,000 | 1,618,930,965,000 | NONE | null | Link to datasets viewer (https://huggingface.co/datasets/viewer/) on Quick Tour page (https://huggingface.co/docs/datasets/quicktour.html) returns "502 Bad Gateway"
The same error occurs with https://huggingface.co/datasets/viewer/?dataset=glue&config=mrpc | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2242/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2241/comments | https://api.github.com/repos/huggingface/datasets/issues/2241/events | https://github.com/huggingface/datasets/pull/2241 | 862,696,460 | MDExOlB1bGxSZXF1ZXN0NjE5MjI0MzIw | 2,241 | Add SLR32 to OpenSLR | {
"login": "cahya-wirawan",
"id": 7669893,
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cahya-wirawan",
"html_url": "https://github.com/cahya-wirawan",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> And yet another one ! Thanks a lot :)\r\n\r\nI just hope you don’t get fed up with openslr PR 😊 there are still few other datasets created by google in openslr that is not in hf dataset yet\r\n"
] | 1,618,916,565,000 | 1,619,194,884,000 | 1,619,192,175,000 | CONTRIBUTOR | null | I would like to add SLR32 to OpenSLR. It contains four South African languages: Afrikaans, Sesotho, Setswana and isiXhosa | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2241/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2241",
"html_url": "https://github.com/huggingface/datasets/pull/2241",
"diff_url": "https://github.com/huggingface/datasets/pull/2241.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2241.patch",
"merged_at": 1619192175000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2240 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2240/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2240/comments | https://api.github.com/repos/huggingface/datasets/issues/2240/events | https://github.com/huggingface/datasets/pull/2240 | 862,537,856 | MDExOlB1bGxSZXF1ZXN0NjE5MDkyODc5 | 2,240 | Clarify how to load wikihow | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,618,905,778,000 | 1,618,998,897,000 | 1,618,998,897,000 | MEMBER | null | Explain more clearly how to load the dataset in the manual download instructions.
Related to #2239. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2240/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2240",
"html_url": "https://github.com/huggingface/datasets/pull/2240",
"diff_url": "https://github.com/huggingface/datasets/pull/2240.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2240.patch",
"merged_at": 1618998897000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2239 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2239/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2239/comments | https://api.github.com/repos/huggingface/datasets/issues/2239/events | https://github.com/huggingface/datasets/issues/2239 | 861,904,306 | MDU6SXNzdWU4NjE5MDQzMDY= | 2,239 | Error loading wikihow dataset | {
"login": "odellus",
"id": 4686956,
"node_id": "MDQ6VXNlcjQ2ODY5NTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/4686956?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/odellus",
"html_url": "https://github.com/odellus",
"followers_url": "https://api.github.com/users/odellus/followers",
"following_url": "https://api.github.com/users/odellus/following{/other_user}",
"gists_url": "https://api.github.com/users/odellus/gists{/gist_id}",
"starred_url": "https://api.github.com/users/odellus/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/odellus/subscriptions",
"organizations_url": "https://api.github.com/users/odellus/orgs",
"repos_url": "https://api.github.com/users/odellus/repos",
"events_url": "https://api.github.com/users/odellus/events{/privacy}",
"received_events_url": "https://api.github.com/users/odellus/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @odellus, thanks for reporting.\r\n\r\nThe `wikihow` dataset has 2 versions:\r\n- `all`: Consisting of the concatenation of all paragraphs as the articles and the bold lines as the reference summaries.\r\n- `sep`: Consisting of each paragraph and its summary.\r\n\r\nTherefore, in order to load it, you have to s... | 1,618,866,151,000 | 1,618,936,391,000 | 1,618,936,391,000 | CONTRIBUTOR | null | ## Describe the bug
When attempting to load wikihow into a dataset with
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
I get the message:
```
AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
at the end of a [full stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2).
## Steps to reproduce the bug
I have followed the instructions for creating a wikihow dataset. The [wikihow dataset site](https://huggingface.co/datasets/wikihow) says to use
```python
from datasets import load_dataset
dataset = load_dataset('wikihow')
```
to load the dataset. I do so and I get the message
```
AssertionError: The dataset wikihow with config all requires manual data.
Please follow the manual download instructions: You need to manually download two wikihow files. An overview of which files to download can be seen at https://github.com/mahnazkoupaee/WikiHow-Dataset.
You need to download the following two files manually:
1) https://ucsb.app.box.com/s/ap23l8gafpezf4tq3wapr6u8241zz358 and save the file under <path/to/folder>/wikihowAll.csv
2) https://ucsb.app.box.com/s/7yq601ijl1lzvlfu4rjdbbxforzd2oag and save the file under <path/to/folder>/wikihowSep.csv
The <path/to/folder> can e.g. be "~/manual_wikihow_data".
Wikihow can then be loaded using the following command `datasets.load_dataset("wikihow", data_dir="<path/to/folder>")`.
.
Manual data can be loaded with `datasets.load_dataset(wikihow, data_dir='<path/to/manual/data>')
```
So I create a directory `./wikihow` and download `wikihowAll.csv` and `wikihowSep.csv` into the new directory.
Then I run
```python
from datasets import load_dataset
dataset = load_dataset('wikihow', data_dir='./wikihow')
```
That's when I get the [stack trace](https://gist.github.com/odellus/602c3b2de52f541d353b1022f320ffc2).
## Expected results
I expected it to load the downloaded files into a dataset.
## Actual results
```python
Using custom data configuration default-data_dir=.%2Fwikihow
Downloading and preparing dataset wikihow/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/azureuser/.cache/huggingface/datasets/wikihow/default-data_dir=.%2Fwikihow/0.0.0/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2...
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-9-5e4d40142f30> in <module>
----> 1 dataset = load_dataset('wikihow',data_dir='./wikihow')

~/.local/lib/python3.6/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
    745             try_from_hf_gcs=try_from_hf_gcs,
    746             base_path=base_path,
--> 747             use_auth_token=use_auth_token,
    748         )
    749

~/.local/lib/python3.6/site-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
    577         if not downloaded_from_gcs:
    578             self._download_and_prepare(
--> 579                 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    580             )
    581         # Sync info

~/.local/lib/python3.6/site-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    632         split_dict = SplitDict(dataset_name=self.name)
    633         split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 634         split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
    635
    636         # Checksums verification

~/.cache/huggingface/modules/datasets_modules/datasets/wikihow/58f42f8f0e4d459811a0f69aaab35870093830ccd58006769e7e1eb3e0e686c2/wikihow.py in _split_generators(self, dl_manager)
    132
    133         path_to_manual_file = os.path.join(
--> 134             os.path.abspath(os.path.expanduser(dl_manager.manual_dir)), self.config.filename
    135         )
    136

AttributeError: 'BuilderConfig' object has no attribute 'filename'
```
## Versions
Paste the output of the following code:
```python
import datasets
import sys
import platform
print(f"""
- Datasets: {datasets.__version__}
- Python: {sys.version}
- Platform: {platform.platform()}
""")
```
```
- Datasets: 1.5.0
- Python: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
- Platform: Linux-5.4.0-1046-azure-x86_64-with-Ubuntu-18.04-bionic
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2239/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2238/comments | https://api.github.com/repos/huggingface/datasets/issues/2238/events | https://github.com/huggingface/datasets/pull/2238 | 861,518,291 | MDExOlB1bGxSZXF1ZXN0NjE4MTY5NzM5 | 2,238 | NLU evaluation data | {
"login": "dkajtoch",
"id": 32985207,
"node_id": "MDQ6VXNlcjMyOTg1MjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/32985207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dkajtoch",
"html_url": "https://github.com/dkajtoch",
"followers_url": "https://api.github.com/users/dkajtoch/followers",
"following_url": "https://api.github.com/users/dkajtoch/following{/other_user}",
"gists_url": "https://api.github.com/users/dkajtoch/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dkajtoch/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dkajtoch/subscriptions",
"organizations_url": "https://api.github.com/users/dkajtoch/orgs",
"repos_url": "https://api.github.com/users/dkajtoch/repos",
"events_url": "https://api.github.com/users/dkajtoch/events{/privacy}",
"received_events_url": "https://api.github.com/users/dkajtoch/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,618,850,840,000 | 1,619,191,925,000 | 1,619,191,925,000 | CONTRIBUTOR | null | New intent classification dataset from https://github.com/xliuhw/NLU-Evaluation-Data | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2238/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2238",
"html_url": "https://github.com/huggingface/datasets/pull/2238",
"diff_url": "https://github.com/huggingface/datasets/pull/2238.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2238.patch",
"merged_at": 1619191925000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2235/comments | https://api.github.com/repos/huggingface/datasets/issues/2235/events | https://github.com/huggingface/datasets/pull/2235 | 861,040,716 | MDExOlB1bGxSZXF1ZXN0NjE3Nzc0NDUw | 2,235 | Update README.md | {
"login": "PierreColombo",
"id": 22492839,
"node_id": "MDQ6VXNlcjIyNDkyODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PierreColombo",
"html_url": "https://github.com/PierreColombo",
"followers_url": "https://api.github.com/users/PierreColombo/followers",
"following_url": "https://api.github.com/users/PierreColombo/following{/other_user}",
"gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions",
"organizations_url": "https://api.github.com/users/PierreColombo/orgs",
"repos_url": "https://api.github.com/users/PierreColombo/repos",
"events_url": "https://api.github.com/users/PierreColombo/events{/privacy}",
"received_events_url": "https://api.github.com/users/PierreColombo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,618,820,462,000 | 1,618,836,559,000 | 1,618,836,559,000 | CONTRIBUTOR | null | Adding relevant citations (paper accepted at AAAI 2020 & EMNLP 2020) to the benchmark | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2235/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2235",
"html_url": "https://github.com/huggingface/datasets/pull/2235",
"diff_url": "https://github.com/huggingface/datasets/pull/2235.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2235.patch",
"merged_at": 1618836559000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2234/comments | https://api.github.com/repos/huggingface/datasets/issues/2234/events | https://github.com/huggingface/datasets/pull/2234 | 860,442,246 | MDExOlB1bGxSZXF1ZXN0NjE3MzI4NDU3 | 2,234 | Fix bash snippet formatting in ADD_NEW_DATASET.md | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,618,675,268,000 | 1,618,829,851,000 | 1,618,818,696,000 | CONTRIBUTOR | null | This PR indents the paragraphs around the bash snippets in ADD_NEW_DATASET.md to fix formatting. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2234/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2234/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2234",
"html_url": "https://github.com/huggingface/datasets/pull/2234",
"diff_url": "https://github.com/huggingface/datasets/pull/2234.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2234.patch",
"merged_at": 1618818696000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2233/comments | https://api.github.com/repos/huggingface/datasets/issues/2233/events | https://github.com/huggingface/datasets/pull/2233 | 860,097,084 | MDExOlB1bGxSZXF1ZXN0NjE3MDYwMTkw | 2,233 | Fix `xnli` dataset tuple key | {
"login": "NikhilBartwal",
"id": 42388668,
"node_id": "MDQ6VXNlcjQyMzg4NjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikhilBartwal",
"html_url": "https://github.com/NikhilBartwal",
"followers_url": "https://api.github.com/users/NikhilBartwal/followers",
"following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}",
"gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions",
"organizations_url": "https://api.github.com/users/NikhilBartwal/orgs",
"repos_url": "https://api.github.com/users/NikhilBartwal/repos",
"events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/NikhilBartwal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,618,600,362,000 | 1,618,822,602,000 | 1,618,822,602,000 | CONTRIBUTOR | null | Closes #2229
The `xnli` dataset yields a tuple key in the case of `ar`, which is inconsistent with the acceptable key types (str/int).
The key was thus ported to `str`, keeping the original information intact. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2233/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2233",
"html_url": "https://github.com/huggingface/datasets/pull/2233",
"diff_url": "https://github.com/huggingface/datasets/pull/2233.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2233.patch",
"merged_at": 1618822602000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2232/comments | https://api.github.com/repos/huggingface/datasets/issues/2232/events | https://github.com/huggingface/datasets/pull/2232 | 860,075,931 | MDExOlB1bGxSZXF1ZXN0NjE3MDQyNTI4 | 2,232 | Start filling GLUE dataset card | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I replaced all the \"we\" and applied your suggestion",
"Merging this for now, we can continue improving this card in other PRs :)"
] | 1,618,598,257,000 | 1,618,997,589,000 | 1,618,997,588,000 | MEMBER | null | The dataset card was pretty much empty.
I added the descriptions (mainly from TFDS since the script is the same), and I also added the task tags as well as examples for a subset of the tasks.
cc @sgugger | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2232/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2232",
"html_url": "https://github.com/huggingface/datasets/pull/2232",
"diff_url": "https://github.com/huggingface/datasets/pull/2232.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2232.patch",
"merged_at": 1618997588000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2231/comments | https://api.github.com/repos/huggingface/datasets/issues/2231/events | https://github.com/huggingface/datasets/pull/2231 | 859,850,488 | MDExOlB1bGxSZXF1ZXN0NjE2ODYyNTEx | 2,231 | Fix map when removing columns on a formatted dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,618,582,135,000 | 1,618,585,805,000 | 1,618,585,804,000 | MEMBER | null | This should fix issue #2226
The `remove_columns` argument was ignored on formatted datasets.
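A minimal repro sketch of the bug (a hypothetical example, not taken from the PR):
```python
# Hypothetical repro: before this fix, remove_columns was ignored when a
# format had been set via with_format/set_format.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
ds = ds.with_format("numpy")
out = ds.map(lambda x: x, remove_columns=["label"])
print(out.column_names)  # expected: ['text']
``` | {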
"url": "https://api.github.com/repos/huggingface/datasets/issues/2231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2231/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2231",
"html_url": "https://github.com/huggingface/datasets/pull/2231",
"diff_url": "https://github.com/huggingface/datasets/pull/2231.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2231.patch",
"merged_at": 1618585804000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2230/comments | https://api.github.com/repos/huggingface/datasets/issues/2230/events | https://github.com/huggingface/datasets/issues/2230 | 859,817,159 | MDU6SXNzdWU4NTk4MTcxNTk= | 2,230 | Keys yielded while generating dataset are not being checked | {
"login": "NikhilBartwal",
"id": 42388668,
"node_id": "MDQ6VXNlcjQyMzg4NjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikhilBartwal",
"html_url": "https://github.com/NikhilBartwal",
"followers_url": "https://api.github.com/users/NikhilBartwal/followers",
"following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}",
"gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions",
"organizations_url": "https://api.github.com/users/NikhilBartwal/orgs",
"repos_url": "https://api.github.com/users/NikhilBartwal/repos",
"events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/NikhilBartwal/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Hi ! Indeed there's no verification on the uniqueness nor the types of the keys.\r\nDo you already have some ideas of what you would like to implement and how ?",
"Hey @lhoestq, thank you so much for the opportunity.\r\nAlthough I haven't had much experience with the HF Datasets code, after a careful look at how... | 1,618,579,787,000 | 1,620,667,881,000 | 1,620,667,881,000 | CONTRIBUTOR | null | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
Even after having a tuple as key, the dataset is generated without any warning.
Also, as tested in the case of `anli` dataset (I tweeked the dataset script to use `1` as a key for every example):
```
>>> import datasets
>>> nik = datasets.load_dataset('anli')
Downloading and preparing dataset anli/plain_text (download: 17.76 MiB, generated: 73.55 MiB, post-processed: Unknown size, total: 91.31 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\anli\plain_text\0.1.0\43fa2c99c10bf8478f1fa0860f7b122c6b277c4c41306255b7641257cf4e3299...
0 examples [00:00, ? examples/s]1 {'uid': '0fd0abfb-659e-4453-b196-c3a64d2d8267', 'premise': 'The Parma trolleybus system (Italian: "Rete filoviaria di Parma" ) forms part of the public transport network of the city and "comune" of Parma, in the region of Emilia-Romagna, northern Italy. In operation since 1953, the system presently comprises four urban routes.', 'hypothesis': 'The trolleybus system has over 2 urban routes', 'label': 'entailment', 'reason': ''}
2021-04-16 12:38:14.483968: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
1 examples [00:01, 1.87s/ examples]1 {'uid': '7ed72ff4-40b7-4f8a-b1b9-6c612aa62c84', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Sharron Macready was a popular character through the 1980's.", 'label': 'neutral', 'reason': ''}
1 {'uid': '5d2930a3-62ac-485d-94d7-4e36cbbcd7b5', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Bastedo didn't keep any pets because of her views on animal rights.", 'label': 'neutral', 'reason': ''}
1 {'uid': '324db753-ddc9-4a85-a825-f09e2e5aebdd', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Alexandra Bastedo was named by her mother.', 'label': 'neutral', 'reason': ''}
1 {'uid': '4874f429-da0e-406a-90c7-22240ff3ddf8', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Bastedo cared for all the animals that inhabit the earth.', 'label': 'neutral', 'reason': ''}
```
Here also, the dataset was generated successfully, without any warning, even though every example had the same key.
The reason appears to stem from here:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L988
Here, although it has access to every key, the key is not checked and the example is written directly:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L992
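A bare-bones sketch of the check this issue proposes (hypothetical code, only to illustrate where the validation could go):
```python
# Hypothetical validation applied where the builder consumes (key, example):
def check_key(key, seen_keys: set):
    if not isinstance(key, (str, int)):
        raise TypeError(f"Key must be str or int, got {type(key)}: {key!r}")
    if key in seen_keys:
        raise ValueError(f"Found duplicate key: {key!r}")
    seen_keys.add(key)

seen = set()
for key, example in [("0_0", {"text": "a"}), ("0_1", {"text": "b"})]:
    check_key(key, seen)
    # writer.write(example)  # would follow, as in builder.py
print("all keys valid")
```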
I would like to take this issue if you allow me. Thank you! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2230/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2229/comments | https://api.github.com/repos/huggingface/datasets/issues/2229/events | https://github.com/huggingface/datasets/issues/2229 | 859,810,602 | MDU6SXNzdWU4NTk4MTA2MDI= | 2,229 | `xnli` dataset creating a tuple key while yielding instead of `str` or `int` | {
"login": "NikhilBartwal",
"id": 42388668,
"node_id": "MDQ6VXNlcjQyMzg4NjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NikhilBartwal",
"html_url": "https://github.com/NikhilBartwal",
"followers_url": "https://api.github.com/users/NikhilBartwal/followers",
"following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}",
"gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions",
"organizations_url": "https://api.github.com/users/NikhilBartwal/orgs",
"repos_url": "https://api.github.com/users/NikhilBartwal/repos",
"events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}",
"received_events_url": "https://api.github.com/users/NikhilBartwal/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Sure sounds good. Also if you find other datasets that use tuples instead of str/int, you can also fix them !\r\nthanks :)",
"@lhoestq I have sent a PR for fixing the issue. Would be great if you could have a look! Thanks!"
] | 1,618,579,313,000 | 1,618,822,602,000 | 1,618,822,602,000 | CONTRIBUTOR | null | When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code when yielding examples, which produces a tuple key instead of the specified `str` or `int` key:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
Since community datasets in TensorFlow Datasets also use HF datasets, this causes a tuple key error while loading HF's `xnli` dataset.
I'm up for sending a fix for this; I think we can simply use `file_idx + "_" + row_idx` as a unique key instead of a tuple.
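A minimal sketch of that proposal (hypothetical values; note the integer indices would need converting to `str` first):
```python
# Hypothetical illustration of the proposed string key.
file_idx, row_idx = 0, 196     # assumed integer counters from the loop
key = f"{file_idx}_{row_idx}"  # -> "0_196", a valid str key
# then: yield key, example     # instead of yielding the tuple itself
print(key)
``` | {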
"url": "https://api.github.com/repos/huggingface/datasets/issues/2229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2229/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2227/comments | https://api.github.com/repos/huggingface/datasets/issues/2227/events | https://github.com/huggingface/datasets/pull/2227 | 859,771,526 | MDExOlB1bGxSZXF1ZXN0NjE2Nzk1NjMx | 2,227 | Use update_metadata_with_features decorator in class_encode_column method | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,618,576,301,000 | 1,618,580,980,000 | 1,618,580,979,000 | CONTRIBUTOR | null | Following @mariosasko 's comment | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2227/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2227/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2227",
"html_url": "https://github.com/huggingface/datasets/pull/2227",
"diff_url": "https://github.com/huggingface/datasets/pull/2227.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2227.patch",
"merged_at": 1618580979000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2225 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2225/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2225/comments | https://api.github.com/repos/huggingface/datasets/issues/2225/events | https://github.com/huggingface/datasets/pull/2225 | 858,469,561 | MDExOlB1bGxSZXF1ZXN0NjE1NzAzMTY4 | 2,225 | fixed one instance of 'train' to 'test' | {
"login": "alexwdong",
"id": 46733535,
"node_id": "MDQ6VXNlcjQ2NzMzNTM1",
"avatar_url": "https://avatars.githubusercontent.com/u/46733535?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexwdong",
"html_url": "https://github.com/alexwdong",
"followers_url": "https://api.github.com/users/alexwdong/followers",
"following_url": "https://api.github.com/users/alexwdong/following{/other_user}",
"gists_url": "https://api.github.com/users/alexwdong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/alexwdong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/alexwdong/subscriptions",
"organizations_url": "https://api.github.com/users/alexwdong/orgs",
"repos_url": "https://api.github.com/users/alexwdong/repos",
"events_url": "https://api.github.com/users/alexwdong/events{/privacy}",
"received_events_url": "https://api.github.com/users/alexwdong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks ! good catch\r\n\r\nCould you also update the metadata of this dataset ?\r\nYou can do so by running\r\n```\r\ndatasets-cli test ./datasets/newsgroup --all_configs --save_infos --ignore_verifications\r\n```\r\nThis should update the dataset_infos.json file that contains the size of all the splits for exampl... | 1,618,460,800,000 | 1,618,524,590,000 | 1,618,521,549,000 | CONTRIBUTOR | null | I believe this should be 'test' instead of 'train' | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2225/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2225/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2225",
"html_url": "https://github.com/huggingface/datasets/pull/2225",
"diff_url": "https://github.com/huggingface/datasets/pull/2225.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2225.patch",
"merged_at": 1618521549000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2223/comments | https://api.github.com/repos/huggingface/datasets/issues/2223/events | https://github.com/huggingface/datasets/pull/2223 | 857,870,800 | MDExOlB1bGxSZXF1ZXN0NjE1MjE4MDIz | 2,223 | Set test cache config | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> why a cache dir per test function does not work?\r\n\r\nProbably because we end up with multiple `datasets_module` in the python path. This breaks the import of all the datasets/metrics modules.\r\nIf you want to use one modules cache per test, you may need remove the `datasets_module` that was added to the pyt... | 1,618,404,924,000 | 1,618,513,885,000 | 1,618,513,885,000 | MEMBER | null | Currently, running the tests populates the default cache directory `"~/.cache"`.
This PR monkey-patches the config to set the cache directory within the temporary test directory, avoiding side effects.
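A rough sketch of the idea (the fixture shape and the exact config attribute are assumptions, not the PR's code):
```python
# Hypothetical pytest fixture redirecting the datasets cache to a tmp dir.
import pytest

@pytest.fixture(autouse=True)
def set_test_cache_config(tmp_path, monkeypatch):
    monkeypatch.setattr("datasets.config.HF_DATASETS_CACHE", str(tmp_path / "datasets"))
``` | {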
"url": "https://api.github.com/repos/huggingface/datasets/issues/2223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2223/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2223",
"html_url": "https://github.com/huggingface/datasets/pull/2223",
"diff_url": "https://github.com/huggingface/datasets/pull/2223.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2223.patch",
"merged_at": 1618513885000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2222/comments | https://api.github.com/repos/huggingface/datasets/issues/2222/events | https://github.com/huggingface/datasets/pull/2222 | 857,847,231 | MDExOlB1bGxSZXF1ZXN0NjE1MTk5MTM5 | 2,222 | Fix too long WindowsFileLock name | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] | closed | false | null | [] | null | [
"Windows users should disable the max path length limit. It's a nightmare to handle it.\r\nAlso the lock path must not be changed in a random way. Otherwise from another process the lock path might not be the same and the locking mechanism won't work.",
"Do you agree with handling the case where MAX_PATH is not d... | 1,618,403,212,000 | 1,618,412,425,000 | 1,618,411,579,000 | MEMBER | null | Fix WindowsFileLock name longer than allowed MAX_PATH by shortening the basename. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2222/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2222",
"html_url": "https://github.com/huggingface/datasets/pull/2222",
"diff_url": "https://github.com/huggingface/datasets/pull/2222.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2222.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2221/comments | https://api.github.com/repos/huggingface/datasets/issues/2221/events | https://github.com/huggingface/datasets/pull/2221 | 857,833,770 | MDExOlB1bGxSZXF1ZXN0NjE1MTg4MTE5 | 2,221 | Add SLR70 - SLR80 and SLR86 to OpenSLR dataset | {
"login": "cahya-wirawan",
"id": 7669893,
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cahya-wirawan",
"html_url": "https://github.com/cahya-wirawan",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,618,402,158,000 | 1,618,408,219,000 | 1,618,408,219,000 | CONTRIBUTOR | null | I would like to add SLR70, SLR71, SLR72, SLR73, SLR74, SLR75, SLR76, SLR77, SLR78, SLR79, SLR80 and SLR86 to OpenSLR dataset. The languages are:
Nigerian English, Chilean Spanish, Colombian Spanish, Peruvian Spanish, Puerto Rico Spanish, Venezuelan Spanish, Basque, Galician, Gujarati and Kannada. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2221/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2221",
"html_url": "https://github.com/huggingface/datasets/pull/2221",
"diff_url": "https://github.com/huggingface/datasets/pull/2221.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2221.patch",
"merged_at": 1618408219000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2220/comments | https://api.github.com/repos/huggingface/datasets/issues/2220/events | https://github.com/huggingface/datasets/pull/2220 | 857,774,626 | MDExOlB1bGxSZXF1ZXN0NjE1MTM4NDQz | 2,220 | Fix infinite loop in WindowsFileLock | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] | closed | false | null | [] | null | [
"How is it possible to get an infinite loop ? Can you add more details ?",
"Yes, in Windows, if the filename is too long, a `FileNotFoundError` is raised. The exception should be raised in this case. Otherwise, we get into an infinite loop.\r\n\r\nIf other process has the file locked, then `PermissionError` is ra... | 1,618,397,398,000 | 1,618,412,390,000 | 1,618,412,374,000 | MEMBER | null | Raise exception to avoid infinite loop. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2220/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2220",
"html_url": "https://github.com/huggingface/datasets/pull/2220",
"diff_url": "https://github.com/huggingface/datasets/pull/2220.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2220.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2219/comments | https://api.github.com/repos/huggingface/datasets/issues/2219/events | https://github.com/huggingface/datasets/pull/2219 | 857,321,242 | MDExOlB1bGxSZXF1ZXN0NjE0NzYxMzA3 | 2,219 | Added CUAD dataset | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"1) Changed the language in a few places apart from those you mentioned in README\r\n2) Reduced the size of dummy data folder by removing all other entries except the first\r\n3) Updated YAML tags by using to the past version of `datasets-tagging` app. Will update the quick fix on that repository too in a while",
... | 1,618,347,903,000 | 1,619,274,351,000 | 1,618,563,044,000 | CONTRIBUTOR | null | Dataset link : https://github.com/TheAtticusProject/cuad/
Working on README.md currently.
Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2219/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2219",
"html_url": "https://github.com/huggingface/datasets/pull/2219",
"diff_url": "https://github.com/huggingface/datasets/pull/2219.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2219.patch",
"merged_at": 1618563044000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2217/comments | https://api.github.com/repos/huggingface/datasets/issues/2217/events | https://github.com/huggingface/datasets/pull/2217 | 857,011,314 | MDExOlB1bGxSZXF1ZXN0NjE0NTAxNjIz | 2,217 | Revert breaking change in cache_files property | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,618,323,604,000 | 1,618,410,264,000 | 1,618,410,263,000 | MEMBER | null | #2025 changed the format of `Dataset.cache_files`.
Before it was formatted like
```python
[{"filename": "path/to/file.arrow", "start": 0, "end": 1337}]
```
and it was changed to
```python
["path/to/file.arrow"]
```
since there are no start/end offsets available anymore.
To make this less breaking, I'm setting the format back to a list of dicts:
```python
[{"filename": "path/to/file.arrow"}]
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2217/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2217",
"html_url": "https://github.com/huggingface/datasets/pull/2217",
"diff_url": "https://github.com/huggingface/datasets/pull/2217.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2217.patch",
"merged_at": 1618410263000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2216 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2216/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2216/comments | https://api.github.com/repos/huggingface/datasets/issues/2216/events | https://github.com/huggingface/datasets/pull/2216 | 856,955,534 | MDExOlB1bGxSZXF1ZXN0NjE0NDU0MjE1 | 2,216 | added real label for glue/mrpc to test set | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,618,320,020,000 | 1,618,322,000,000 | 1,618,321,999,000 | MEMBER | null | Added real label to `glue.py` `mrpc` task for test split. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2216/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2216/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2216",
"html_url": "https://github.com/huggingface/datasets/pull/2216",
"diff_url": "https://github.com/huggingface/datasets/pull/2216.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2216.patch",
"merged_at": 1618321999000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2215/comments | https://api.github.com/repos/huggingface/datasets/issues/2215/events | https://github.com/huggingface/datasets/pull/2215 | 856,716,791 | MDExOlB1bGxSZXF1ZXN0NjE0MjUyNTEy | 2,215 | Add datasets SLR35 and SLR36 to OpenSLR | {
"login": "cahya-wirawan",
"id": 7669893,
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cahya-wirawan",
"html_url": "https://github.com/cahya-wirawan",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq,\r\nCould you please help me, I got this error message in all \"ci/circleci: run_dataset_script_tests_pyarrow*\" tests:\r\n```\r\n...\r\n \"\"\"Wrapper classes for various types of tokenization.\"\"\"\r\n \r\n from bleurt.lib import bert_tokenization\r\n import tensorflow.compat.v1 as tf\r\... | 1,618,302,247,000 | 1,618,322,714,000 | 1,618,322,714,000 | CONTRIBUTOR | null | I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB) which are Large Javanese and Sundanese ASR training data set collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2215/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2215/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2215",
"html_url": "https://github.com/huggingface/datasets/pull/2215",
"diff_url": "https://github.com/huggingface/datasets/pull/2215.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2215.patch",
"merged_at": 1618322714000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2214/comments | https://api.github.com/repos/huggingface/datasets/issues/2214/events | https://github.com/huggingface/datasets/issues/2214 | 856,333,657 | MDU6SXNzdWU4NTYzMzM2NTc= | 2,214 | load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | {
"login": "nsaphra",
"id": 414788,
"node_id": "MDQ6VXNlcjQxNDc4OA==",
"avatar_url": "https://avatars.githubusercontent.com/u/414788?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nsaphra",
"html_url": "https://github.com/nsaphra",
"followers_url": "https://api.github.com/users/nsaphra/followers",
"following_url": "https://api.github.com/users/nsaphra/following{/other_user}",
"gists_url": "https://api.github.com/users/nsaphra/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nsaphra/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nsaphra/subscriptions",
"organizations_url": "https://api.github.com/users/nsaphra/orgs",
"repos_url": "https://api.github.com/users/nsaphra/repos",
"events_url": "https://api.github.com/users/nsaphra/events{/privacy}",
"received_events_url": "https://api.github.com/users/nsaphra/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi @nsaphra, thanks for reporting.\r\n\r\nThis issue was fixed in `datasets` version 1.3.0. Could you please update `datasets` and tell me if the problem persists?\r\n```shell\r\npip install -U datasets\r\n```",
"There might be a bug in the conda version of `datasets` 1.2.1 where the datasets/metric scripts are ... | 1,618,259,161,000 | 1,619,191,202,000 | 1,619,191,202,000 | NONE | null | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 502, in load_metric
File "/ext3/miniconda3/lib/python3.8/site-packages/datasets-1.2.1-py3.8.egg/datasets/load.py", line 66, in import_main_class
File "/ext3/miniconda3/lib/python3.8/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 783, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/ns4008/.cache/huggingface/modules/datasets_modules/metrics/glue/e4606ab9804a36bcd5a9cebb2cb65bb14b6ac78ee9e6d5981fa679a495dd55de/glue.py", line 105, in <module>
@datasets.utils.file_utils.add_start_docstrings(_DESCRIPTION, _KWARGS_DESCRIPTION)
AttributeError: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2214/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2213/comments | https://api.github.com/repos/huggingface/datasets/issues/2213/events | https://github.com/huggingface/datasets/pull/2213 | 856,025,320 | MDExOlB1bGxSZXF1ZXN0NjEzNjcwODk2 | 2,213 | Fix lc_quad download checksum | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,618,237,019,000 | 1,618,437,894,000 | 1,618,407,745,000 | CONTRIBUTOR | null | Fixes #2211 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2213/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2213",
"html_url": "https://github.com/huggingface/datasets/pull/2213",
"diff_url": "https://github.com/huggingface/datasets/pull/2213.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2213.patch",
"merged_at": 1618407745000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2211/comments | https://api.github.com/repos/huggingface/datasets/issues/2211/events | https://github.com/huggingface/datasets/issues/2211 | 855,988,410 | MDU6SXNzdWU4NTU5ODg0MTA= | 2,211 | Getting checksum error when trying to load lc_quad dataset | {
"login": "hanss0n",
"id": 21348833,
"node_id": "MDQ6VXNlcjIxMzQ4ODMz",
"avatar_url": "https://avatars.githubusercontent.com/u/21348833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hanss0n",
"html_url": "https://github.com/hanss0n",
"followers_url": "https://api.github.com/users/hanss0n/followers",
"following_url": "https://api.github.com/users/hanss0n/following{/other_user}",
"gists_url": "https://api.github.com/users/hanss0n/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hanss0n/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hanss0n/subscriptions",
"organizations_url": "https://api.github.com/users/hanss0n/orgs",
"repos_url": "https://api.github.com/users/hanss0n/repos",
"events_url": "https://api.github.com/users/hanss0n/events{/privacy}",
"received_events_url": "https://api.github.com/users/hanss0n/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi,\r\n\r\nI've already opened a PR with the fix. If you are in a hurry, just build the project from source and run:\r\n```bash\r\ndatasets-cli test datasets/lc_quad --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n",
"Ah sorry, I tried searching but couldn't find any related PR. \r\n\r\nThank you... | 1,618,234,738,000 | 1,618,407,745,000 | 1,618,407,745,000 | NONE | null | I'm having issues loading the [lc_quad](https://huggingface.co/datasets/fquad) dataset by running:
```Python
lc_quad = load_dataset("lc_quad")
```
which is giving me the following error:
```
Using custom data configuration default
Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, generated: 19.77 MiB, post-processed: Unknown size, total: 23.46 MiB) to /root/.cache/huggingface/datasets/lc_quad/default/2.0.0/5a98fe174603f5dec6df07edf1c2b4d2317210d2ad61f5a393839bca4d64e5a7...
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-42-404ace83f73c> in <module>()
----> 1 lc_quad = load_dataset("lc_quad")
3 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
37 if len(bad_urls) > 0:
38 error_msg = "Checksums didn't match" + for_verification_name + ":\n"
---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls))
40 logger.info("All the checksums matched successfully" + for_verification_name)
41
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/AskNowQA/LC-QuAD2.0/archive/master.zip']
```
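(I can bypass the verification with the `ignore_verifications` flag, but that only silences the mismatch instead of explaining it:)
```Python
# workaround only: skips checksum/size verification of the downloaded files
lc_quad = load_dataset("lc_quad", ignore_verifications=True)
```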
Does anyone know why this could be and how I fix it? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2211/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2210/comments | https://api.github.com/repos/huggingface/datasets/issues/2210/events | https://github.com/huggingface/datasets/issues/2210 | 855,709,400 | MDU6SXNzdWU4NTU3MDk0MDA= | 2,210 | dataloading slow when using HUGE dataset | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Yes this is an issue with `datasets<=1.5.0`\r\nThis issue has been fixed by #2122 , we'll do a new release soon :)\r\nFor now you can test it on the `master` branch.",
"Hi, thank you for your answer. I did not realize that my issue stems from the same problem. "
] | 1,618,216,382,000 | 1,618,279,385,000 | 1,618,279,385,000 | NONE | null | Hi,
When I use datasets with 600GB data, the dataloading time increases significantly.
I am experimenting with two datasets, and one is about 60GB and the other 600GB.
Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle ddp training.
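Roughly, the relevant part of the setup looks like this (a simplified sketch; the file and column names are illustrative, not my actual code):
```python
from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("json", data_files="train.jsonl", split="train")
dataset.set_format("torch", columns=["input_ids", "attention_mask", "labels"])
train_loader = DataLoader(dataset, batch_size=32)  # then handed to pytorch-lightning's Trainer (ddp)
```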
When looking at the profiles reported by pytorch-lightning for the two runs, I see that fetching a batch (`get_train_batch`) consumes an unreasonable amount of time when the data is large. What could be the cause?
* 60GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 200.33 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 71.994 |1 | 71.994 | 35.937 |
run_training_batch | 0.64373 |100 | 64.373 | 32.133 |
optimizer_step_and_closure_0 | 0.64322 |100 | 64.322 | 32.108 |
training_step_and_backward | 0.61004 |100 | 61.004 | 30.452 |
model_backward | 0.37552 |100 | 37.552 | 18.745 |
model_forward | 0.22813 |100 | 22.813 | 11.387 |
training_step | 0.22759 |100 | 22.759 | 11.361 |
get_train_batch | 0.066385 |100 | 6.6385 | 3.3138 |
```
* 600GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 3285.6 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 1397.9 |1 | 1397.9 | 42.546 |
run_training_batch | 7.2596 |100 | 725.96 | 22.095 |
optimizer_step_and_closure_0 | 7.2589 |100 | 725.89 | 22.093 |
training_step_and_backward | 7.223 |100 | 722.3 | 21.984 |
model_backward | 6.9662 |100 | 696.62 | 21.202 |
get_train_batch | 6.322 |100 | 632.2 | 19.241 |
model_forward | 0.24902 |100 | 24.902 | 0.75789 |
training_step | 0.2485 |100 | 24.85 | 0.75633 |
```
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2210/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2209/comments | https://api.github.com/repos/huggingface/datasets/issues/2209/events | https://github.com/huggingface/datasets/pull/2209 | 855,638,232 | MDExOlB1bGxSZXF1ZXN0NjEzMzQwMTI2 | 2,209 | Add code of conduct to the project | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | [] | 1,618,211,774,000 | 1,618,250,152,000 | 1,618,250,152,000 | MEMBER | null | Add code of conduct to the project and link it from README and CONTRIBUTING.
This was already done in `transformers`. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2209/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2209",
"html_url": "https://github.com/huggingface/datasets/pull/2209",
"diff_url": "https://github.com/huggingface/datasets/pull/2209.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2209.patch",
"merged_at": 1618250152000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2208/comments | https://api.github.com/repos/huggingface/datasets/issues/2208/events | https://github.com/huggingface/datasets/pull/2208 | 855,343,835 | MDExOlB1bGxSZXF1ZXN0NjEzMTAxMzMw | 2,208 | Remove Python2 leftovers | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"merging since the CI is fixed on master"
] | 1,618,157,283,000 | 1,618,437,936,000 | 1,618,407,651,000 | CONTRIBUTOR | null | This PR removes Python2 leftovers since this project aims for Python3.6+ (and as of 2020 Python2 is no longer officially supported) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2208/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2208/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2208",
"html_url": "https://github.com/huggingface/datasets/pull/2208",
"diff_url": "https://github.com/huggingface/datasets/pull/2208.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2208.patch",
"merged_at": 1618407650000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2206/comments | https://api.github.com/repos/huggingface/datasets/issues/2206/events | https://github.com/huggingface/datasets/issues/2206 | 855,252,415 | MDU6SXNzdWU4NTUyNTI0MTU= | 2,206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | {
"login": "yana-xuyan",
"id": 38536635,
"node_id": "MDQ6VXNlcjM4NTM2NjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yana-xuyan",
"html_url": "https://github.com/yana-xuyan",
"followers_url": "https://api.github.com/users/yana-xuyan/followers",
"following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}",
"gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions",
"organizations_url": "https://api.github.com/users/yana-xuyan/orgs",
"repos_url": "https://api.github.com/users/yana-xuyan/repos",
"events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yana-xuyan/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nthe output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L138-L141)). It looks like that one of the values in a dictionary returned by the tokenizer is out of the assume... | 1,618,130,409,000 | 1,636,546,710,000 | 1,636,545,868,000 | NONE | null | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_single
writer.write(example)
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 296, in write
self.write_on_file()
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 270, in write_on_file
pa_array = pa.array(typed_sequence)
File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 108, in __arrow_array__
out = out.cast(pa.list_(self.optimized_int_type))
File "pyarrow/array.pxi", line 810, in pyarrow.lib.Array.cast
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/pyarrow/compute.py", line 281, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 465, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 294, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127
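(The failing cast can be reproduced in isolation; this is an illustrative sketch, not my actual pipeline:)
```python
import pyarrow as pa

arr = pa.array([[50259]])      # a newly added special-token id, larger than 127
arr.cast(pa.list_(pa.int8()))  # raises ArrowInvalid: Integer value 50259 not in range: -128 to 127
```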
Do you have any idea about it? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2206/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2206/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2205/comments | https://api.github.com/repos/huggingface/datasets/issues/2205/events | https://github.com/huggingface/datasets/pull/2205 | 855,207,605 | MDExOlB1bGxSZXF1ZXN0NjEzMDAwMzYw | 2,205 | Updating citation information on LinCE readme | {
"login": "gaguilar",
"id": 5833357,
"node_id": "MDQ6VXNlcjU4MzMzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5833357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gaguilar",
"html_url": "https://github.com/gaguilar",
"followers_url": "https://api.github.com/users/gaguilar/followers",
"following_url": "https://api.github.com/users/gaguilar/following{/other_user}",
"gists_url": "https://api.github.com/users/gaguilar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gaguilar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gaguilar/subscriptions",
"organizations_url": "https://api.github.com/users/gaguilar/orgs",
"repos_url": "https://api.github.com/users/gaguilar/repos",
"events_url": "https://api.github.com/users/gaguilar/events{/privacy}",
"received_events_url": "https://api.github.com/users/gaguilar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,618,111,085,000 | 1,618,250,014,000 | 1,618,250,014,000 | CONTRIBUTOR | null | Hi!
I just updated the citation information in this PR. It previously had an additional BibTeX entry from one of the datasets used in LinCE alongside the LinCE BibTeX itself. I removed the former and added a link that shows the full list of citations for each dataset.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2205/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2205/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2205",
"html_url": "https://github.com/huggingface/datasets/pull/2205",
"diff_url": "https://github.com/huggingface/datasets/pull/2205.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2205.patch",
"merged_at": 1618250014000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2204/comments | https://api.github.com/repos/huggingface/datasets/issues/2204/events | https://github.com/huggingface/datasets/pull/2204 | 855,144,431 | MDExOlB1bGxSZXF1ZXN0NjEyOTU1MzM2 | 2,204 | Add configurable options to `seqeval` metric | {
"login": "marrodion",
"id": 44571847,
"node_id": "MDQ6VXNlcjQ0NTcxODQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/marrodion",
"html_url": "https://github.com/marrodion",
"followers_url": "https://api.github.com/users/marrodion/followers",
"following_url": "https://api.github.com/users/marrodion/following{/other_user}",
"gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}",
"starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/marrodion/subscriptions",
"organizations_url": "https://api.github.com/users/marrodion/orgs",
"repos_url": "https://api.github.com/users/marrodion/repos",
"events_url": "https://api.github.com/users/marrodion/events{/privacy}",
"received_events_url": "https://api.github.com/users/marrodion/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,618,084,699,000 | 1,618,494,586,000 | 1,618,494,586,000 | CONTRIBUTOR | null | Fixes #2148
Adds options to use strict mode, different evaluation schemes and sample weights, and to adjust the `zero_division` behavior when it is encountered.
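A sketch of how the new options can be combined (the argument names follow this PR; the values are illustrative):
```python
from datasets import load_metric

metric = load_metric("seqeval")
results = metric.compute(
    predictions=[["B-PER", "I-PER", "O"]],
    references=[["B-PER", "I-PER", "O"]],
    mode="strict",    # only exact entity matches are counted
    scheme="IOB2",    # passed as a string and resolved to the corresponding seqeval scheme object
    zero_division=0,  # value to use instead of raising when a division by zero occurs
)
```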
`seqeval` provides schemes as objects, hence dynamic import from string, to avoid making the user do the import (thanks to @albertvillanova for the `importlib` idea). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2204/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2204/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2204",
"html_url": "https://github.com/huggingface/datasets/pull/2204",
"diff_url": "https://github.com/huggingface/datasets/pull/2204.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2204.patch",
"merged_at": 1618494586000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2203/comments | https://api.github.com/repos/huggingface/datasets/issues/2203/events | https://github.com/huggingface/datasets/pull/2203 | 855,053,595 | MDExOlB1bGxSZXF1ZXN0NjEyODg4MzA5 | 2,203 | updated banking77 train and test data | {
"login": "hsali",
"id": 6765330,
"node_id": "MDQ6VXNlcjY3NjUzMzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6765330?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hsali",
"html_url": "https://github.com/hsali",
"followers_url": "https://api.github.com/users/hsali/followers",
"following_url": "https://api.github.com/users/hsali/following{/other_user}",
"gists_url": "https://api.github.com/users/hsali/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hsali/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hsali/subscriptions",
"organizations_url": "https://api.github.com/users/hsali/orgs",
"repos_url": "https://api.github.com/users/hsali/repos",
"events_url": "https://api.github.com/users/hsali/events{/privacy}",
"received_events_url": "https://api.github.com/users/hsali/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Can you add a description regarding this PR ? Why do you think we need to update the dummy data used to test the `banking77` dataset loading script ?",
"Closing for inactivity. Feel free to re-open if you want to push this change"
] | 1,618,056,610,000 | 1,619,188,419,000 | 1,619,188,419,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2203/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2203",
"html_url": "https://github.com/huggingface/datasets/pull/2203",
"diff_url": "https://github.com/huggingface/datasets/pull/2203.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2203.patch",
"merged_at": null
} | true | |
https://api.github.com/repos/huggingface/datasets/issues/2202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2202/comments | https://api.github.com/repos/huggingface/datasets/issues/2202/events | https://github.com/huggingface/datasets/pull/2202 | 854,501,109 | MDExOlB1bGxSZXF1ZXN0NjEyNDM2ODMx | 2,202 | Add classes GenerateMode, DownloadConfig and Version to the documentation | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,617,973,099,000 | 1,618,250,280,000 | 1,618,250,279,000 | MEMBER | null | Add documentation for classes `GenerateMode`, `DownloadConfig` and `Version`.
Update the docstring of `load_dataset` to create cross-reference links to the classes.
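For illustration, a sketch of where these classes surface in a typical call (assuming they are importable from the top-level package, as the documentation now implies):
```python
from datasets import load_dataset, DownloadConfig, GenerateMode

dl_config = DownloadConfig(resume_download=True)
ds = load_dataset(
    "squad",
    download_config=dl_config,  # controls downloading/caching behavior
    download_mode=GenerateMode.REUSE_DATASET_IF_EXISTS,
)
```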
Related to #2187. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2202/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2202/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2202",
"html_url": "https://github.com/huggingface/datasets/pull/2202",
"diff_url": "https://github.com/huggingface/datasets/pull/2202.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2202.patch",
"merged_at": 1618250279000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2201/comments | https://api.github.com/repos/huggingface/datasets/issues/2201/events | https://github.com/huggingface/datasets/pull/2201 | 854,499,563 | MDExOlB1bGxSZXF1ZXN0NjEyNDM1NTE3 | 2,201 | Fix ArrowWriter overwriting features in ArrowBasedBuilder | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,617,972,979,000 | 1,618,234,337,000 | 1,618,234,336,000 | MEMBER | null | This should fix the issues with CSV loading experienced in #2153 and #2200.
The CSV builder is an ArrowBasedBuilder that had an issue with its ArrowWriter used to write the arrow file from the csv data.
The writer wasn't initialized with the features passed by the user. Therefore the writer was inferring the features from the arrow data, discarding the ones the user had passed.
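With the writer initialized from the user's features, something like the following should now round-trip (a quick-check sketch, assuming a local `data.csv` with `text` and `label` columns):
```python
from datasets import load_dataset, Features, Value, ClassLabel

features = Features({"text": Value("string"), "label": ClassLabel(names=["neg", "pos"])})
dset = load_dataset("csv", data_files="data.csv", features=features, split="train")
assert dset.features["label"] == features["label"]  # user-provided features are kept
```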
I fixed that and I updated the tests | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2201/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2201/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2201",
"html_url": "https://github.com/huggingface/datasets/pull/2201",
"diff_url": "https://github.com/huggingface/datasets/pull/2201.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2201.patch",
"merged_at": 1618234336000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2200/comments | https://api.github.com/repos/huggingface/datasets/issues/2200/events | https://github.com/huggingface/datasets/issues/2200 | 854,449,656 | MDU6SXNzdWU4NTQ0NDk2NTY= | 2,200 | _prepare_split will overwrite DatasetBuilder.info.features | {
"login": "Gforky",
"id": 4157614,
"node_id": "MDQ6VXNlcjQxNTc2MTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4157614?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gforky",
"html_url": "https://github.com/Gforky",
"followers_url": "https://api.github.com/users/Gforky/followers",
"following_url": "https://api.github.com/users/Gforky/following{/other_user}",
"gists_url": "https://api.github.com/users/Gforky/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gforky/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gforky/subscriptions",
"organizations_url": "https://api.github.com/users/Gforky/orgs",
"repos_url": "https://api.github.com/users/Gforky/repos",
"events_url": "https://api.github.com/users/Gforky/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gforky/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Hi ! This might be related to #2153 \r\n\r\nYou're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\nI'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n\r\nEDIT: opened #2201",
"> Hi ! This might be related to #2153\r\n> \r\n> Yo... | 1,617,968,833,000 | 1,622,803,055,000 | 1,622,803,055,000 | NONE | null | Hi, here is my issue:
I initialized a Csv datasetbuilder with specific features:
```
def get_dataset_features(data_args):
    features = {}
    if data_args.text_features:
        features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")})
    if data_args.num_features:
        features.update({text_feature: hf_features.Value("float32") for text_feature in data_args.num_features.strip().split(",")})
    if data_args.label_classes:
        features["label"] = hf_features.ClassLabel(names=data_args.label_classes.strip().split(","))
    else:
        features["label"] = hf_features.Value("float32")
    return hf_features.Features(features)

datasets = load_dataset(extension,
                        data_files=data_files,
                        sep=data_args.delimiter,
                        header=data_args.header,
                        column_names=data_args.column_names.split(",") if data_args.column_names else None,
                        features=get_dataset_features(data_args=data_args))
```
The `features` are printed out as below before `builder_instance.as_dataset` is called:
```
{'label': ClassLabel(num_classes=2, names=['unacceptable', 'acceptable'], names_file=None, id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)}
```
But after `builder_instance.as_dataset` is called for the Csv dataset builder, the `features` are changed to:
```
{'label': Value(dtype='int64', id=None), 'notated': Value(dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'src_code': Value(dtype='string', id=None)}
```
After digging into the code, I realized that in `ArrowBasedBuilder._prepare_split`, the DatasetBuilder's info's features are overwritten by `ArrowWriter`'s `_features`.
But `ArrowWriter` is initialized without passing `features`.
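For what it's worth, what I would expect is something along these lines inside `_prepare_split` (an illustrative sketch, not the actual library code; I'm assuming `ArrowWriter` accepts a `features` argument):
```python
# sketch: forward the builder's features so the writer doesn't re-infer them
writer = ArrowWriter(features=self.info.features, path=fpath)
```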
So my concern is:
Must this overwrite be done, or should there be an option to pass the features to the `_prepare_split` function? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2200/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2199/comments | https://api.github.com/repos/huggingface/datasets/issues/2199/events | https://github.com/huggingface/datasets/pull/2199 | 854,417,318 | MDExOlB1bGxSZXF1ZXN0NjEyMzY0ODU3 | 2,199 | Fix backward compatibility in Dataset.load_from_disk | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @lhoestq, could you please check if this makes sense? Thanks.",
"What about using `_indices_data_files` field in save_to_disk instead of `_indices_files` ?\r\nThis way future datasets can also be reloaded from older versions of the lib\r\n\r\n`_indices_files` was introduced in a recent PR and was not released... | 1,617,966,070,000 | 1,617,983,825,000 | 1,617,983,825,000 | MEMBER | null | Fix backward compatibility when loading from disk an old dataset saved to disk with indices using key "_indices_data_files".
Related to #2195. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2199/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2199/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2199",
"html_url": "https://github.com/huggingface/datasets/pull/2199",
"diff_url": "https://github.com/huggingface/datasets/pull/2199.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2199.patch",
"merged_at": 1617983825000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2198/comments | https://api.github.com/repos/huggingface/datasets/issues/2198/events | https://github.com/huggingface/datasets/pull/2198 | 854,357,481 | MDExOlB1bGxSZXF1ZXN0NjEyMzE0MTIz | 2,198 | added file_permission in load_dataset | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.github.com/users/bhavitvyamalik/followers",
"following_url": "https://api.github.com/users/bhavitvyamalik/following{/other_user}",
"gists_url": "https://api.github.com/users/bhavitvyamalik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhavitvyamalik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhavitvyamalik/subscriptions",
"organizations_url": "https://api.github.com/users/bhavitvyamalik/orgs",
"repos_url": "https://api.github.com/users/bhavitvyamalik/repos",
"events_url": "https://api.github.com/users/bhavitvyamalik/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhavitvyamalik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"From offline discussions: we want to make the permissions handling consistent with `transformers`. However from discussion in https://github.com/huggingface/transformers/pull/11119 it looks like it might not be a good solution to provide this argument. Users should use umask for now, and we'll see how things evol... | 1,617,961,146,000 | 1,618,582,306,000 | 1,618,582,306,000 | CONTRIBUTOR | null | As discussed in #2065 I've added `file_permission` argument in `load_dataset`.
Added mainly 2 things here:
1) Permissions of downloaded datasets, when converted to `.arrow` files, can be changed with the `file_permission` argument in `load_dataset` (default is 0o644).
2) In case the user uses `map` later on to generate another cache file of the dataset, it ensures the permissions of the newly generated file are similar to those of the `*-train.arrow` file inside `cache_dir` for that dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2198/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2198",
"html_url": "https://github.com/huggingface/datasets/pull/2198",
"diff_url": "https://github.com/huggingface/datasets/pull/2198.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2198.patch",
"merged_at": null
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2197/comments | https://api.github.com/repos/huggingface/datasets/issues/2197/events | https://github.com/huggingface/datasets/pull/2197 | 854,356,559 | MDExOlB1bGxSZXF1ZXN0NjEyMzEzMzQw | 2,197 | fix missing indices_files in load_form_disk | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,617,961,077,000 | 1,617,962,080,000 | 1,617,962,079,000 | MEMBER | null | This should fix #2195
`load_from_disk` was failing if there was no "_indices_files" field in state.json. This can happen if the dataset has no indices mapping | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2197/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2197/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2197",
"html_url": "https://github.com/huggingface/datasets/pull/2197",
"diff_url": "https://github.com/huggingface/datasets/pull/2197.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2197.patch",
"merged_at": 1617962079000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2196/comments | https://api.github.com/repos/huggingface/datasets/issues/2196/events | https://github.com/huggingface/datasets/issues/2196 | 854,126,114 | MDU6SXNzdWU4NTQxMjYxMTQ= | 2,196 | `load_dataset` caches two arrow files? | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | null | [] | null | [
"Hi ! Files that starts with `cache-*` are cached computation files, i.e. they are the cached results of map/filter/cast/etc. operations. For example if you used `map` on your dataset to transform it, then the resulting dataset is going to be stored and cached in a `cache-*` file. These files are used to avoid havi... | 1,617,940,159,000 | 1,618,205,129,000 | 1,618,205,129,000 | NONE | null | Hi,
I am using datasets to load a large json file of 587G.
I checked the cache folder and found that two arrow files were created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
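For context, the `cache-*` files can be listed and removed through the public API (a minimal sketch, assuming the loaded dataset object is `ds`):
```python
# List the files backing the dataset, then delete only the cached
# map/filter results (cache-*.arrow), leaving json-train.arrow untouched.
print(ds.cache_files)
n_removed = ds.cleanup_cache_files()
print(f"Removed {n_removed} cached computation file(s)")
```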
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2196/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2195/comments | https://api.github.com/repos/huggingface/datasets/issues/2195/events | https://github.com/huggingface/datasets/issues/2195 | 854,070,194 | MDU6SXNzdWU4NTQwNzAxOTQ= | 2,195 | KeyError: '_indices_files' in `arrow_dataset.py` | {
"login": "samsontmr",
"id": 15007950,
"node_id": "MDQ6VXNlcjE1MDA3OTUw",
"avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samsontmr",
"html_url": "https://github.com/samsontmr",
"followers_url": "https://api.github.com/users/samsontmr/followers",
"following_url": "https://api.github.com/users/samsontmr/following{/other_user}",
"gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions",
"organizations_url": "https://api.github.com/users/samsontmr/orgs",
"repos_url": "https://api.github.com/users/samsontmr/repos",
"events_url": "https://api.github.com/users/samsontmr/events{/privacy}",
"received_events_url": "https://api.github.com/users/samsontmr/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"Thanks for reporting @samsontmr.\r\n\r\nIt seems a backward compatibility issue...",
"Thanks @samsontmr this should be fixed on master now\r\n\r\nFeel free to reopen if you're still having issues"
] | 1,617,932,232,000 | 1,617,962,109,000 | 1,617,962,079,000 | NONE | null | After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line 784, in load_from_disk
return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 692, in load_from_disk
dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 634, in load_from_disk
if state["_indices_files"]:
KeyError: '_indices_files'
```
I believe this is the line causing the error since there may not be a `_indices_files` key in the older versions:
https://github.com/huggingface/datasets/blob/b70141e3c5149430951773aaa0155555c5fb3e76/src/datasets/arrow_dataset.py#L634
May I suggest using `state.get()` instead of directly indexing the dictionary?
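For illustration, the defensive lookup would look something like this (a sketch, not the final patch):
```python
# Hypothetical fix: avoid the KeyError on state files written by older versions.
if state.get("_indices_files"):
    ...  # load the indices mapping as before
```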
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2195/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2194/comments | https://api.github.com/repos/huggingface/datasets/issues/2194/events | https://github.com/huggingface/datasets/issues/2194 | 853,909,452 | MDU6SXNzdWU4NTM5MDk0NTI= | 2,194 | py3.7: TypeError: can't pickle _LazyModule objects | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"\r\nThis wasn't a `datasets` problem, but `transformers`' and it was solved here https://github.com/huggingface/transformers/pull/11168\r\n"
] | 1,617,915,768,000 | 1,617,987,410,000 | 1,617,933,177,000 | CONTRIBUTOR | null | While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[testing]
export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \
examples/language-modeling/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \
--per_device_train_batch_size $BS --output_dir /tmp/test-clm --block_size 128 --logging_steps 1 \
--fp16
```
```
Traceback (most recent call last):
File "examples/language-modeling/run_clm.py", line 453, in <module>
main()
File "examples/language-modeling/run_clm.py", line 336, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1259, in map
update_data=update_data,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 157, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 389, in dumps
dump(obj, file)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 361, in dump
Pickler(file, recurse=True).dump(obj)
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 556, in save_function
obj=obj,
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 524, in save
rv = reduce(self.proto)
TypeError: can't pickle _LazyModule objects
```
```
$ python --version
Python 3.7.4
$ python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.8.0.dev20210110+cu110
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
```
Thanks. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2194/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2193/comments | https://api.github.com/repos/huggingface/datasets/issues/2193/events | https://github.com/huggingface/datasets/issues/2193 | 853,725,707 | MDU6SXNzdWU4NTM3MjU3MDc= | 2,193 | Filtering/mapping on one column is very slow | {
"login": "norabelrose",
"id": 39116809,
"node_id": "MDQ6VXNlcjM5MTE2ODA5",
"avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/norabelrose",
"html_url": "https://github.com/norabelrose",
"followers_url": "https://api.github.com/users/norabelrose/followers",
"following_url": "https://api.github.com/users/norabelrose/following{/other_user}",
"gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}",
"starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions",
"organizations_url": "https://api.github.com/users/norabelrose/orgs",
"repos_url": "https://api.github.com/users/norabelrose/repos",
"events_url": "https://api.github.com/users/norabelrose/events{/privacy}",
"received_events_url": "https://api.github.com/users/norabelrose/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] | closed | false | null | [] | null | [
"Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178 \r\n\r\nI think you can expect to have the fast version of `filter` available next week.\r\n\r\nWe'll make it only select one column, and we'll also make the overall filtering operation way faster by avoi... | 1,617,905,774,000 | 1,619,453,639,000 | 1,619,453,639,000 | CONTRIBUTOR | null | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single`, on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function. It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that; I'm not very familiar with the pyarrow API.
I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset.
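To illustrate the suggestion above, the column selection could look roughly like this (a sketch against the pyarrow API; `self._data`, `input_columns`, and `batch_size` are assumed names for the internals):
```python
# Hypothetical sketch for _map_single: materialize only the requested
# columns before slicing out batches.
projected = self._data.select(input_columns)  # pyarrow.Table.select
for i in range(0, projected.num_rows, batch_size):
    batch = projected.slice(i, batch_size).to_pydict()
    ...  # pass `batch` to the user function as before
```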
PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2193/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2192/comments | https://api.github.com/repos/huggingface/datasets/issues/2192/events | https://github.com/huggingface/datasets/pull/2192 | 853,547,910 | MDExOlB1bGxSZXF1ZXN0NjExNjE5NTY0 | 2,192 | Fix typo in huggingface hub | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/users/LysandreJik/followers",
"following_url": "https://api.github.com/users/LysandreJik/following{/other_user}",
"gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions",
"organizations_url": "https://api.github.com/users/LysandreJik/orgs",
"repos_url": "https://api.github.com/users/LysandreJik/repos",
"events_url": "https://api.github.com/users/LysandreJik/events{/privacy}",
"received_events_url": "https://api.github.com/users/LysandreJik/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,617,892,944,000 | 1,617,896,861,000 | 1,617,896,860,000 | MEMBER | null | pip knows how to resolve to `huggingface_hub`, but conda doesn't!
The `packaging` dependency is also required for the build to complete. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2192/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2192",
"html_url": "https://github.com/huggingface/datasets/pull/2192",
"diff_url": "https://github.com/huggingface/datasets/pull/2192.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2192.patch",
"merged_at": 1617896860000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2191/comments | https://api.github.com/repos/huggingface/datasets/issues/2191/events | https://github.com/huggingface/datasets/pull/2191 | 853,364,204 | MDExOlB1bGxSZXF1ZXN0NjExNDY1Nzc0 | 2,191 | Refactorize tests to use Dataset as context manager | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 2851292821,
"node_id": "MDU6TGFiZWwyODUxMjkyODIx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/refactoring",
"name": "refactoring",
"color": "B67A40",
"default": false,
"description": "Restructuring existing code without changing its external behavior"
}
] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"id": 6644198,
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"title": "1.6",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 4,
"state": "closed",
"created_at": 1617973671000,
"updated_at": 1618937446000,
"due_on": 1618556400000,
"closed_at": 1618937446000
} | [
"I find very interesting that idea of using a fixture instead!\r\n\r\nLet me rework a little bit this PR, @lhoestq.",
"@lhoestq, as this is a big refactoring, I had many problems to solve the conflicts with the master branch...\r\n\r\nTherefore, I think it is better to merge this as it is, and then to make other ... | 1,617,880,864,000 | 1,618,818,791,000 | 1,618,818,790,000 | MEMBER | null | Refactorize Dataset tests to use Dataset as context manager. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2191/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2191/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2191",
"html_url": "https://github.com/huggingface/datasets/pull/2191",
"diff_url": "https://github.com/huggingface/datasets/pull/2191.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2191.patch",
"merged_at": 1618818790000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2190/comments | https://api.github.com/repos/huggingface/datasets/issues/2190/events | https://github.com/huggingface/datasets/issues/2190 | 853,181,564 | MDU6SXNzdWU4NTMxODE1NjQ= | 2,190 | News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | {
"login": "anassalamah",
"id": 8571003,
"node_id": "MDQ6VXNlcjg1NzEwMDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anassalamah",
"html_url": "https://github.com/anassalamah",
"followers_url": "https://api.github.com/users/anassalamah/followers",
"following_url": "https://api.github.com/users/anassalamah/following{/other_user}",
"gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}",
"starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions",
"organizations_url": "https://api.github.com/users/anassalamah/orgs",
"repos_url": "https://api.github.com/users/anassalamah/repos",
"events_url": "https://api.github.com/users/anassalamah/events{/privacy}",
"received_events_url": "https://api.github.com/users/anassalamah/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi @anassalamah,\r\n\r\nCould you please try with this:\r\n```python\r\ntrain_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[:98%]')\r\nval_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[98%:]')\r\n```",
"Hello @albertvillanova, \r\n\r\nThanks for... | 1,617,868,423,000 | 1,621,850,635,000 | 1,621,850,635,000 | NONE | null | I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that are not ar-en translations but ar-hi
val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327) ,range(1384,1399), range(1030,1042)), with_indices=True)
```
* I'm fairly new to using datasets so I might be doing something wrong | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2190/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2188/comments | https://api.github.com/repos/huggingface/datasets/issues/2188/events | https://github.com/huggingface/datasets/issues/2188 | 853,044,166 | MDU6SXNzdWU4NTMwNDQxNjY= | 2,188 | Duplicate data in Timit dataset | {
"login": "BHM-RB",
"id": 78190188,
"node_id": "MDQ6VXNlcjc4MTkwMTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BHM-RB",
"html_url": "https://github.com/BHM-RB",
"followers_url": "https://api.github.com/users/BHM-RB/followers",
"following_url": "https://api.github.com/users/BHM-RB/following{/other_user}",
"gists_url": "https://api.github.com/users/BHM-RB/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BHM-RB/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BHM-RB/subscriptions",
"organizations_url": "https://api.github.com/users/BHM-RB/orgs",
"repos_url": "https://api.github.com/users/BHM-RB/repos",
"events_url": "https://api.github.com/users/BHM-RB/events{/privacy}",
"received_events_url": "https://api.github.com/users/BHM-RB/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Thanks for reporting\r\nIf I recall correctly this has been recently fixed #1995\r\nCan you try to upgrade your local version of `datasets` ?\r\n```\r\npip install --upgrade datasets\r\n```",
"Hi Ihoestq,\r\n\r\nThank you. It works after upgrading the datasets\r\n"
] | 1,617,855,714,000 | 1,617,883,999,000 | 1,617,883,999,000 | NONE | null | I ran a simple piece of code to list all the texts in the Timit dataset, and the texts were all the same.
Is this dataset corrupted?
**Code:**
```python
timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
```
**Result:**
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
...
...
Would such an act of refusal be useful? | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2188/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2186/comments | https://api.github.com/repos/huggingface/datasets/issues/2186/events | https://github.com/huggingface/datasets/pull/2186 | 852,840,819 | MDExOlB1bGxSZXF1ZXN0NjExMDMxNzE0 | 2,186 | GEM: new challenge sets | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"cc @sebastiangehrmann"
] | 1,617,831,547,000 | 1,617,832,595,000 | 1,617,832,595,000 | MEMBER | null | This PR updates the GEM dataset to:
- remove extraneous fields in WikiAuto after https://github.com/huggingface/datasets/pull/2171 fixed the source
- add context and services to Schema Guided Dialog
- add new or update existing challenge sets for MLSUM ES and DE, XSUM, and SGD | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2186/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 1,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2186/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2186",
"html_url": "https://github.com/huggingface/datasets/pull/2186",
"diff_url": "https://github.com/huggingface/datasets/pull/2186.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2186.patch",
"merged_at": 1617832595000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2185/comments | https://api.github.com/repos/huggingface/datasets/issues/2185/events | https://github.com/huggingface/datasets/issues/2185 | 852,684,395 | MDU6SXNzdWU4NTI2ODQzOTU= | 2,185 | .map() and distributed training | {
"login": "VictorSanh",
"id": 16107619,
"node_id": "MDQ6VXNlcjE2MTA3NjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VictorSanh",
"html_url": "https://github.com/VictorSanh",
"followers_url": "https://api.github.com/users/VictorSanh/followers",
"following_url": "https://api.github.com/users/VictorSanh/following{/other_user}",
"gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions",
"organizations_url": "https://api.github.com/users/VictorSanh/orgs",
"repos_url": "https://api.github.com/users/VictorSanh/repos",
"events_url": "https://api.github.com/users/VictorSanh/events{/privacy}",
"received_events_url": "https://api.github.com/users/VictorSanh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi, one workaround would be to save the mapped(tokenized in your case) file using `save_to_disk`, and having each process load this file using `load_from_disk`. This is what I am doing, and in this case, I turn off the ability to automatically load from the cache.\r\n\r\nAlso, multiprocessing the map function seem... | 1,617,819,734,000 | 1,634,973,075,000 | 1,617,982,711,000 | MEMBER | null | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
`datasets` is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokenize_function(examples):
return tokenizer(examples[text_column_name])
logger.info("Mapping dataset to tokenized dataset.")
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=True,
)
```
I am using 31 workers (`preprocessing_num_workers=31`) and thus it creates 31 `cache*.arrow` files in `my_path/train` (there is only a train split).
When I relaunch the script, the map tokenization is skipped in favor of loading the 31 previously cached files, and that's perfect.
Everything so far was done by launching a **single-process script**.
I now launch the same training script in **distributed mode** (`python -m torch.distributed.launch --nproc_per_node 2`). However, once it reaches the map call, it re-does the tokenization... instead of loading the 31 cached files.
I tried adding the `cache_file_name` argument: `cache_file_name={"train": my_path/one_of_the_arrow_file}`, but I can't pass all 31 cached files, so it probably isn't the right way to do it.
**My question: what is the best way to load cached files if they were pre-processed and dumped in multiple arrow files?** It seems automatically handled for single processes but fails on distributed training.
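One workaround pattern mentioned in the comments is to preprocess on a single process and have the other ranks load the result (a minimal sketch, assuming `torch.distributed` is initialized and `/shared/tokenized` is a hypothetical path on a shared filesystem):
```python
import torch.distributed as dist
from datasets import load_from_disk

if dist.get_rank() == 0:
    tokenized = datasets.map(tokenize_function, batched=True,
                             num_proc=preprocessing_num_workers,
                             remove_columns=column_names)
    tokenized.save_to_disk("/shared/tokenized")
dist.barrier()  # other ranks wait for rank 0 to finish preprocessing
tokenized_datasets = load_from_disk("/shared/tokenized")
```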
- I am following the same structure as the examples of transformers (more specifically [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) in my case)
- I am using 1.5.0 version of datasets if that matters. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2185/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2184/comments | https://api.github.com/repos/huggingface/datasets/issues/2184/events | https://github.com/huggingface/datasets/pull/2184 | 852,597,258 | MDExOlB1bGxSZXF1ZXN0NjEwODIxMTc0 | 2,184 | Implementation of class_encode_column | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Made the required changes @lhoestq , sorry it took so much time!"
] | 1,617,814,063,000 | 1,618,573,477,000 | 1,618,572,419,000 | CONTRIBUTOR | null | Addresses #2176
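For reference, a hypothetical usage sketch of the method this PR introduces (the exact signature is to be confirmed in the diff):
```python
# Assumed usage: turn a string column into a ClassLabel-encoded one.
ds = ds.class_encode_column("label")
print(ds.features["label"])  # ClassLabel(names=[...]) after encoding
```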
I'm happy to discuss the API and internals! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2184/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2184",
"html_url": "https://github.com/huggingface/datasets/pull/2184",
"diff_url": "https://github.com/huggingface/datasets/pull/2184.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2184.patch",
"merged_at": 1618572419000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2183/comments | https://api.github.com/repos/huggingface/datasets/issues/2183/events | https://github.com/huggingface/datasets/pull/2183 | 852,518,411 | MDExOlB1bGxSZXF1ZXN0NjEwNzU3MjUz | 2,183 | Fix s3fs tests for py36 and py37+ | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,617,808,631,000 | 1,617,872,085,000 | 1,617,872,084,000 | MEMBER | null | Recently several changes happened:
1. latest versions of `fsspec` require python>=3.7 for async features
2. `s3fs` added a dependency on `aiobotocore`, which is not compatible with the `moto` s3 mock context manager
This PR fixes both issues, by pinning `fsspec` and `s3fs` for python 3.6, and by using `moto` in server mode to support running the tests on python>=3.7 with the latest version of `fsspec` and `s3fs`.
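Illustratively, the python-version-conditional pins could be expressed with environment markers like this (version numbers are placeholders, not the exact pins from this PR):
```python
# Hypothetical test requirements; the pins below are illustrative only.
TESTS_REQUIRE = [
    "fsspec<0.9.0;python_version<'3.7'",  # pre-async fsspec for py3.6
    "s3fs<0.5.0;python_version<'3.7'",    # pre-aiobotocore s3fs for py3.6
    "fsspec;python_version>='3.7'",
    "s3fs;python_version>='3.7'",
    "moto[server]",  # run moto as a server instead of the mock context manager
]
```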
cc @philschmid | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2183/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2183/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2183",
"html_url": "https://github.com/huggingface/datasets/pull/2183",
"diff_url": "https://github.com/huggingface/datasets/pull/2183.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2183.patch",
"merged_at": 1617872084000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2182/comments | https://api.github.com/repos/huggingface/datasets/issues/2182/events | https://github.com/huggingface/datasets/pull/2182 | 852,384,872 | MDExOlB1bGxSZXF1ZXN0NjEwNjQ2MDIy | 2,182 | Set default in-memory value depending on the dataset size | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"id": 6644198,
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"title": "1.6",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 4,
"state": "closed",
"created_at": 1617973671000,
"updated_at": 1618937446000,
"due_on": 1618556400000,
"closed_at": 1618937446000
} | [
"I ping @krandiash to keep him up to date.",
"TODO:\r\n- [x] Add a section in the docs about this.\r\n- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~",
"@lhoestq I have a questi... | 1,617,800,418,000 | 1,618,928,412,000 | 1,618,913,044,000 | MEMBER | null | Set a default value for `in_memory` depending on the size of the dataset to be loaded.
Close #2179.
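Conceptually, the default could be resolved like this (a sketch with a hypothetical threshold constant; not necessarily the merged implementation):
```python
# Hypothetical resolution of the in_memory default based on dataset size.
MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES = 250 * 2**20  # placeholder: 250 MiB

def resolve_in_memory(in_memory, dataset_size):
    if in_memory is not None:  # an explicit user choice always wins
        return in_memory
    return dataset_size is not None and dataset_size <= MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES
```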
TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~ | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2182/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2182",
"html_url": "https://github.com/huggingface/datasets/pull/2182",
"diff_url": "https://github.com/huggingface/datasets/pull/2182.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2182.patch",
"merged_at": 1618913043000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2181/comments | https://api.github.com/repos/huggingface/datasets/issues/2181/events | https://github.com/huggingface/datasets/issues/2181 | 852,261,607 | MDU6SXNzdWU4NTIyNjE2MDc= | 2,181 | Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | {
"login": "hwijeen",
"id": 29157715,
"node_id": "MDQ6VXNlcjI5MTU3NzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hwijeen",
"html_url": "https://github.com/hwijeen",
"followers_url": "https://api.github.com/users/hwijeen/followers",
"following_url": "https://api.github.com/users/hwijeen/following{/other_user}",
"gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions",
"organizations_url": "https://api.github.com/users/hwijeen/orgs",
"repos_url": "https://api.github.com/users/hwijeen/repos",
"events_url": "https://api.github.com/users/hwijeen/events{/privacy}",
"received_events_url": "https://api.github.com/users/hwijeen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Hi ! Can you try to increase the block size ? For example\r\n```python\r\nblock_size_10MB = 10<<20\r\nload_dataset(\"json\", ..., block_size=block_size_10MB)\r\n```\r\nThe block size corresponds to how much bytes to process at a time from the input stream.\r\nThis will determine multi-threading granularity as well... | 1,617,791,206,000 | 1,618,211,755,000 | 1,618,211,755,000 | NONE | null | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 531, in incomplete_dir
yield tmp_dir
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 650, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 1027, in _prepare_split
for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__
for obj in iterable:
File "/app/.cache/huggingface/modules/datasets_modules/datasets/json/9498524fd296a6cca99c66d6c5be507d1c0991f5a814e535b507f4a66096a641/json.py", line 83, in _generate_tables
parse_options=self.config.pa_parse_options,
File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
When using only a small portion of the sample file, say the first 100 lines, it works perfectly well.
I see that the error comes from pyarrow, but could you give me a hint or possible solutions?
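For reference, the fix suggested in the comments is to increase the JSON reader's block size (the file path below is a placeholder):
```python
from datasets import load_dataset

block_size_10MB = 10 << 20  # 10 MiB
ds = load_dataset("json", data_files="my_file.json", block_size=block_size_10MB)
```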
#369 describes the same error and #372 claims to have fixed the issue, but I have no clue why I am still getting this one. Thanks in advance! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2181/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2181/timeline | null | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2180/comments | https://api.github.com/repos/huggingface/datasets/issues/2180/events | https://github.com/huggingface/datasets/pull/2180 | 852,258,635 | MDExOlB1bGxSZXF1ZXN0NjEwNTQxOTA2 | 2,180 | Add tel to xtreme tatoeba | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,617,790,995,000 | 1,617,810,635,000 | 1,617,810,634,000 | MEMBER | null | This should fix issue #2149 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2180/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2180",
"html_url": "https://github.com/huggingface/datasets/pull/2180",
"diff_url": "https://github.com/huggingface/datasets/pull/2180.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2180.patch",
"merged_at": 1617810634000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2179/comments | https://api.github.com/repos/huggingface/datasets/issues/2179/events | https://github.com/huggingface/datasets/issues/2179 | 852,237,957 | MDU6SXNzdWU4NTIyMzc5NTc= | 2,179 | Load small datasets in-memory instead of using memory map | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6... | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
} | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [] | 1,617,789,496,000 | 1,618,913,044,000 | 1,618,913,043,000 | MEMBER | null | Currently all datasets are loaded using memory mapping by default in `load_dataset`.
However, this might not be necessary for small datasets. If a dataset is small enough, it can be loaded in memory, and:
- its memory footprint would be small, so that's fine
- in-memory computations/queries would be faster
- on-disk caching would be disabled, making computations even faster (no I/O bottleneck from the disk)
- but running the same computation a second time would recompute everything, since there would be no cached results on disk. This is probably fine, though, since computations would be fast anyway, and users should be able to provide a cache filename if needed.
Therefore, maybe the default behavior of `load_dataset` should be to load small datasets in memory and big datasets using memory mapping (a sketch follows this record). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2179/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2179/timeline | null | null | null | false |
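As a rough illustration of the behavior proposed above, loading a dataset fully in memory can already be requested explicitly. This is a minimal sketch: `keep_in_memory` is a `load_dataset` flag in recent versions of `datasets`, while the automatic size-based default discussed in the issue is only a proposal here.

```python
# Sketch: load a small dataset into RAM instead of memory-mapping it from disk.
from datasets import load_dataset

ds = load_dataset("squad", split="train", keep_in_memory=True)
print(len(ds))
print(ds.cache_files)  # expected to be empty/unused for an in-memory dataset
```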
https://api.github.com/repos/huggingface/datasets/issues/2178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2178/comments | https://api.github.com/repos/huggingface/datasets/issues/2178/events | https://github.com/huggingface/datasets/pull/2178 | 852,215,058 | MDExOlB1bGxSZXF1ZXN0NjEwNTA1Mjg1 | 2,178 | Fix cast memory usage by using map on subtables | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"id": 6644198,
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"title": "1.6",
"description": "Next minor release",
"creator": {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
},
"open_issues": 0,
"closed_issues": 4,
"state": "closed",
"created_at": 1617973671000,
"updated_at": 1618937446000,
"due_on": 1618556400000,
"closed_at": 1618937446000
} | [
"I addressed your comments about the docstrings and the output validation :)",
"I updated the bleurt mocking method and bleurt test is passing now.\r\nI also ran the slow tests and they are passing for bleurt.",
"Thanks @lhoestq and @albertvillanova !"
] | 1,617,787,850,000 | 1,618,928,444,000 | 1,618,306,096,000 | MEMBER | null | The `cast` operation on a pyarrow Table may create new arrays in memory.
This is an issue since users expect memory-mapped datasets not to fill up the RAM.
To fix that, I used `map` to write a new arrow file on disk when cast is used.
To make things more convenient, I introduced the `arrow` formatting of a dataset, which makes it return pyarrow tables instead of python dicts. This way one can apply pyarrow transforms directly when using `map` (a sketch follows this record).
edit: we'll use the same mechanism for `filter` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2178/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2178/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2178",
"html_url": "https://github.com/huggingface/datasets/pull/2178",
"diff_url": "https://github.com/huggingface/datasets/pull/2178.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2178.patch",
"merged_at": 1618306096000
} | true |
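A hedged sketch of the `arrow` formatting described in this PR: with the format set to `"arrow"`, a batched `map` receives pyarrow Tables, so pyarrow-level transforms such as `cast` can be applied without going through python dicts. The schema below is illustrative.

```python
# Sketch: apply a pyarrow-level cast through map, using the "arrow" format.
import pyarrow as pa
from datasets import Dataset

ds = Dataset.from_dict({"id": [1, 2, 3]})
target_schema = pa.schema({"id": pa.float32()})

casted = ds.with_format("arrow").map(
    lambda table: table.cast(target_schema),  # `table` is a pyarrow.Table here
    batched=True,
)
print(casted.features)  # id should now be a float32 value
```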
https://api.github.com/repos/huggingface/datasets/issues/2177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2177/comments | https://api.github.com/repos/huggingface/datasets/issues/2177/events | https://github.com/huggingface/datasets/pull/2177 | 852,065,307 | MDExOlB1bGxSZXF1ZXN0NjEwMzc5MDYx | 2,177 | Add social thumbnail | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/users/philschmid/followers",
"following_url": "https://api.github.com/users/philschmid/following{/other_user}",
"gists_url": "https://api.github.com/users/philschmid/gists{/gist_id}",
"starred_url": "https://api.github.com/users/philschmid/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/philschmid/subscriptions",
"organizations_url": "https://api.github.com/users/philschmid/orgs",
"repos_url": "https://api.github.com/users/philschmid/repos",
"events_url": "https://api.github.com/users/philschmid/events{/privacy}",
"received_events_url": "https://api.github.com/users/philschmid/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,617,777,606,000 | 1,617,783,361,000 | 1,617,783,361,000 | MEMBER | null | # What does this PR do?
I added OpenGraph/Twitter Card support to the docs to create nice social thumbnails.

To be able to add these, I needed to install `sphinxext-opengraph`. I came across this [issue](https://github.com/readthedocs/readthedocs.org/issues/1758) on the readthedocs repo saying that since someone has already built this plugin, they will not integrate it or provide documentation for it themselves. That's why I added it here for building the documentation (a config sketch follows this record). The repository can be found [here](https://github.com/wpilibsuite/sphinxext-opengraph/tree/main).
P.S. It seems that `make style` never ran for `docs/`; I hope the changes are okay, otherwise I'll revert them. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2177/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2177/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2177",
"html_url": "https://github.com/huggingface/datasets/pull/2177",
"diff_url": "https://github.com/huggingface/datasets/pull/2177.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2177.patch",
"merged_at": 1617783361000
} | true |
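For context, enabling the plugin mentioned above happens in Sphinx's `conf.py`. This is a hedged sketch: `ogp_site_url` and `ogp_image` are documented options of `sphinxext-opengraph`, but the values below are placeholders, not the ones used in the PR.

```python
# conf.py (excerpt) — sketch of enabling sphinxext-opengraph for social cards.
extensions = [
    # ... existing Sphinx extensions ...
    "sphinxext.opengraph",
]

ogp_site_url = "https://huggingface.co/docs/datasets/"  # canonical base URL (placeholder)
ogp_image = "https://example.org/datasets-thumbnail.png"  # social thumbnail image (placeholder)
```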
https://api.github.com/repos/huggingface/datasets/issues/2175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2175/comments | https://api.github.com/repos/huggingface/datasets/issues/2175/events | https://github.com/huggingface/datasets/issues/2175 | 851,836,096 | MDU6SXNzdWU4NTE4MzYwOTY= | 2,175 | dataset.search_batch() function outputs all -1 indices sometime. | {
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Actually, I found the answer [here](https://github.com/facebookresearch/faiss/wiki/FAQ#what-does-it-mean-when-a-search-returns--1-ids). \r\n\r\nSo we have to do some modifications to the code for instances where the index doesn't retrieve any IDs.",
"@lhoestq @patrickvonplaten \r\n\r\nI also found another short... | 1,617,745,849,000 | 1,618,575,676,000 | 1,618,575,675,000 | NONE | null | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase, exactly at [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L231), an error occurs when all retrieved indices are -1. Please refer to the screenshot of a PID worker.

Here, my retrieval batch size is 2 and n_docs is 5. I can work around this at the np.stack call (a sketch follows this record), but I want to ask why we get an output index of -1. Do you have any idea :) ?
Is this a problem with the index, where faiss can't find any similar vector?
Is there documentation on the output index being -1?
@lhoestq
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2175/timeline | null | null | null | false |
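As the faiss FAQ linked in the comments explains, -1 is a sentinel id returned when a probe finds fewer than `k` neighbours. A minimal sketch of the kind of guard one could put in front of `np.stack` follows; the padding strategy is illustrative, not what `transformers` actually does.

```python
# Sketch: replace faiss's -1 sentinel ids with a fallback id before stacking.
import numpy as np

def replace_missing_ids(ids: np.ndarray, pad_id: int = 0) -> np.ndarray:
    """Return a copy of `ids` with every -1 replaced by `pad_id` (illustrative)."""
    ids = ids.copy()
    ids[ids < 0] = pad_id
    return ids

batch_ids = np.array([[12, 7, -1], [-1, -1, -1]])  # retrieve batch size 2, k = 3
print(replace_missing_ids(batch_ids))
```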
https://api.github.com/repos/huggingface/datasets/issues/2174 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2174/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2174/comments | https://api.github.com/repos/huggingface/datasets/issues/2174/events | https://github.com/huggingface/datasets/pull/2174 | 851,383,675 | MDExOlB1bGxSZXF1ZXN0NjA5ODE2OTQ2 | 2,174 | Pin docutils for better doc | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugger/followers",
"following_url": "https://api.github.com/users/sgugger/following{/other_user}",
"gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sgugger/subscriptions",
"organizations_url": "https://api.github.com/users/sgugger/orgs",
"repos_url": "https://api.github.com/users/sgugger/repos",
"events_url": "https://api.github.com/users/sgugger/events{/privacy}",
"received_events_url": "https://api.github.com/users/sgugger/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,617,712,820,000 | 1,617,713,753,000 | 1,617,713,753,000 | MEMBER | null | The latest release of docutils make the navbar in the documentation weird and the Markdown wrongly interpreted:

We had the same problem in Transformers and solved it by pinning docutils (a dep of sphinx).
You can see the version after the change [here](https://32769-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html).
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2174/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2174/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2174",
"html_url": "https://github.com/huggingface/datasets/pull/2174",
"diff_url": "https://github.com/huggingface/datasets/pull/2174.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2174.patch",
"merged_at": 1617713753000
} | true |
https://api.github.com/repos/huggingface/datasets/issues/2173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2173/comments | https://api.github.com/repos/huggingface/datasets/issues/2173/events | https://github.com/huggingface/datasets/pull/2173 | 851,359,284 | MDExOlB1bGxSZXF1ZXN0NjA5Nzk2NzI2 | 2,173 | Add OpenSLR dataset | {
"login": "cahya-wirawan",
"id": 7669893,
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cahya-wirawan",
"html_url": "https://github.com/cahya-wirawan",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,617,710,914,000 | 1,618,246,486,000 | 1,618,246,486,000 | CONTRIBUTOR | null | OpenSLR (https://openslr.org/) is a site devoted to hosting speech and language resources, such as training corpora for speech recognition, and software related to speech recognition. There are around 80 speech datasets listed on OpenSLR; currently this PR includes only 9 of them: SLR41, SLR42, SLR43, SLR44, SLR63, SLR64, SLR65, SLR66 and SLR69 (Javanese, Khmer, Nepali and Sundanese, Malayalam, Marathi, Tamil, Telugu and Catalan). I can add the other speech datasets gradually later on (a usage sketch follows this record). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2173/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2173/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2173",
"html_url": "https://github.com/huggingface/datasets/pull/2173",
"diff_url": "https://github.com/huggingface/datasets/pull/2173.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2173.patch",
"merged_at": 1618246485000
} | true |
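A hedged usage sketch for this addition, assuming the loader exposes the SLR identifiers listed above as config names:

```python
# Sketch: load one of the new OpenSLR configs (config name assumed from the PR).
from datasets import load_dataset

slr41 = load_dataset("openslr", "SLR41", split="train")  # Javanese speech data
print(slr41[0])
```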
https://api.github.com/repos/huggingface/datasets/issues/2172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2172/comments | https://api.github.com/repos/huggingface/datasets/issues/2172/events | https://github.com/huggingface/datasets/pull/2172 | 851,229,399 | MDExOlB1bGxSZXF1ZXN0NjA5Njg4ODgx | 2,172 | Pin fsspec lower than 0.9.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 1,617,700,749,000 | 1,617,702,567,000 | 1,617,702,566,000 | MEMBER | null | Today's release of `fsspec` 0.9.0 brought with it a new release of `s3fs` 0.6.0, but this version breaks the CI (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/5312/workflows/490f3240-cd1c-4dd1-bb60-b416771c5584/jobs/32734) for example).
I'm pinning `fsspec` until this has been resolved (a sketch of the pin follows this record). | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/2172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/2172/timeline | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2172",
"html_url": "https://github.com/huggingface/datasets/pull/2172",
"diff_url": "https://github.com/huggingface/datasets/pull/2172.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2172.patch",
"merged_at": 1617702566000
} | true |
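The pin itself would live in the package requirements; a sketch of what it presumably looks like (the exact specifier is in the PR's diff, which isn't shown here):

```python
# setup.py (excerpt, illustrative) — temporary upper bound on fsspec.
install_requires = [
    # ...
    "fsspec<0.9.0",  # fsspec 0.9.0 pulled in s3fs 0.6.0, which breaks the CI
]
```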