Dataset schema (each record below lists one value per column, in this order):

column                      type     values
url                         string   lengths 58 to 61
repository_url              string   1 distinct value
labels_url                  string   lengths 72 to 75
comments_url                string   lengths 67 to 70
events_url                  string   lengths 65 to 68
html_url                    string   lengths 46 to 51
id                          int64    599M to 1.28B
node_id                     string   lengths 18 to 32
number                      int64    1 to 4.53k
title                       string   lengths 1 to 276
user                        dict
labels                      list
state                       string   2 distinct values
locked                      bool     1 distinct value
assignee                    dict
assignees                   list
milestone                   dict
comments                    list
created_at                  int64    1,587B to 1,656B
updated_at                  int64    1,587B to 1,656B
closed_at                   int64    1,587B to 1,656B
author_association          string   3 distinct values
active_lock_reason          null
body                        string   lengths 0 to 228k
reactions                   dict
timeline_url                string   lengths 67 to 70
performed_via_github_app    null
state_reason                string   1 distinct value
draft                       bool     2 distinct values
pull_request                dict
is_pull_request             bool     2 distinct values
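As a rough illustration of working with this schema, records like the ones listed below could be loaded and inspected with the `datasets` library roughly as follows; the JSON Lines file name is a placeholder assumption, not part of this dump.

```python
from datasets import load_dataset

# Hypothetical local export of the records listed below; the file name is an assumption.
issues = load_dataset("json", data_files="datasets-issues.jsonl", split="train")

print(issues.column_names)                     # url, repository_url, labels_url, ...
print(issues[0]["title"], issues[0]["state"])  # fields of the first record
pulls = issues.filter(lambda row: row["is_pull_request"])
print(len(pulls), "of", len(issues), "records are pull requests")
```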
https://api.github.com/repos/huggingface/datasets/issues/1774
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1774/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1774/comments
https://api.github.com/repos/huggingface/datasets/issues/1774/events
https://github.com/huggingface/datasets/issues/1774
792,730,559
MDU6SXNzdWU3OTI3MzA1NTk=
1,774
is it possible to make slice to be more compatible like python list and numpy?
{ "login": "world2vec", "id": 7607120, "node_id": "MDQ6VXNlcjc2MDcxMjA=", "avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4", "gravatar_id": "", "url": "https://api.github.com/users/world2vec", "html_url": "https://github.com/world2vec", "followers_url": "https://api.github.com/users/wo...
[]
open
false
{ "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https://api.github.com/use...
[ { "login": "mariosasko", "id": 47462742, "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mariosasko", "html_url": "https://github.com/mariosasko", "followers_url": "https:...
null
[ "Hi ! Thanks for reporting.\r\nI am working on changes in the way data are sliced from arrow. I can probably fix your issue with the changes I'm doing.\r\nIf you have some code to reproduce the issue it would be nice so I can make sure that this case will be supported :)\r\nI'll make a PR in a few days ", "Good i...
1,611,468,952,000
1,654,098,890,000
null
NONE
null
Hi, see below error: ``` AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples. ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1774/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1774/timeline
null
null
null
null
false
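For context on the report above: Python lists (and NumPy arrays) clamp an over-long slice instead of raising, which is the behaviour the issue asks `datasets` to match. A minimal sketch of the difference; the Dataset behaviour shown is quoted from the error message above and depends on the library version.

```python
data = list(range(20))

# Built-in sequences clamp out-of-range slice bounds, so this simply returns all 20 items.
print(len(data[:10_000_000_000_000_000]))  # 20

# With the datasets version reported above, the same slice on a 20-example Dataset raised:
# AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples.
```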
https://api.github.com/repos/huggingface/datasets/issues/1773
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1773/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1773/comments
https://api.github.com/repos/huggingface/datasets/issues/1773/events
https://github.com/huggingface/datasets/issues/1773
792,708,160
MDU6SXNzdWU3OTI3MDgxNjA=
1,773
bug in loading datasets
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "f...
[]
closed
false
null
[]
null
[ "Looks like an issue with your csv file. Did you use the right delimiter ?\r\nApparently at line 37 the CSV reader from pandas reads 2 fields instead of 1.", "Note that you can pass any argument you would pass to `pandas.read_csv` as kwargs to `load_dataset`. For example you can do\r\n```python\r\nfrom datasets i...
1,611,456,825,000
1,630,918,486,000
1,628,100,781,000
NONE
null
Hi, I need to load a dataset, I use these commands: ``` from datasets import load_dataset dataset = load_dataset('csv', data_files={'train': 'sick/train.csv', 'test': 'sick/test.csv', 'validation': 'sick/validation.csv'}) prin...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1773/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1773/timeline
null
completed
null
null
false
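The reply quoted above notes that `load_dataset` forwards keyword arguments to `pandas.read_csv` for the csv loader. A minimal sketch under that assumption, reusing the file names from the report; the delimiter value is a placeholder.

```python
from datasets import load_dataset

# Arguments accepted by pandas.read_csv (delimiter, quotechar, ...) are passed through.
dataset = load_dataset(
    "csv",
    data_files={
        "train": "sick/train.csv",
        "test": "sick/test.csv",
        "validation": "sick/validation.csv",
    },
    delimiter=",",  # placeholder: set this to whatever separator the files actually use
)
print(dataset["train"][0])
```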
https://api.github.com/repos/huggingface/datasets/issues/1772
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1772/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1772/comments
https://api.github.com/repos/huggingface/datasets/issues/1772/events
https://github.com/huggingface/datasets/issues/1772
792,703,797
MDU6SXNzdWU3OTI3MDM3OTc=
1,772
Adding SICK dataset
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "f...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,611,454,531,000
1,612,540,165,000
1,612,540,165,000
NONE
null
Hi It would be great to include SICK dataset. ## Adding a Dataset - **Name:** SICK - **Description:** a well known entailment dataset - **Paper:** http://marcobaroni.org/composes/sick.html - **Data:** http://marcobaroni.org/composes/sick.html - **Motivation:** this is an important NLI benchmark Instruction...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1772/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1772/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1771
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1771/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1771/comments
https://api.github.com/repos/huggingface/datasets/issues/1771/events
https://github.com/huggingface/datasets/issues/1771
792,701,276
MDU6SXNzdWU3OTI3MDEyNzY=
1,771
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py
{ "login": "world2vec", "id": 7607120, "node_id": "MDQ6VXNlcjc2MDcxMjA=", "avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4", "gravatar_id": "", "url": "https://api.github.com/users/world2vec", "html_url": "https://github.com/world2vec", "followers_url": "https://api.github.com/users/wo...
[]
closed
false
null
[]
null
[ "I temporary manually download csv.py as custom dataset loading script", "Indeed in 1.2.1 the script to process csv file is downloaded. Starting from the next release though we include the csv processing directly in the library.\r\nSee PR #1726 \r\nWe'll do a new release soon :)", "Thanks." ]
1,611,453,232,000
1,611,529,589,000
1,611,529,589,000
NONE
null
Hi, When I load_dataset from local csv files, below error happened, looks raw.githubusercontent.com was blocked by the chinese government. But why it need to download csv.py? should it include when pip install the dataset? ``` Traceback (most recent call last): File "/home/tom/pyenv/pystory/lib/python3.6/site-p...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1771/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1771/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1770
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1770/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1770/comments
https://api.github.com/repos/huggingface/datasets/issues/1770/events
https://github.com/huggingface/datasets/issues/1770
792,698,148
MDU6SXNzdWU3OTI2OTgxNDg=
1,770
how can I combine 2 dataset with different/same features?
{ "login": "world2vec", "id": 7607120, "node_id": "MDQ6VXNlcjc2MDcxMjA=", "avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4", "gravatar_id": "", "url": "https://api.github.com/users/world2vec", "html_url": "https://github.com/world2vec", "followers_url": "https://api.github.com/users/wo...
[]
closed
false
null
[]
null
[ "Hi ! Currently we don't have a way to `zip` datasets but we plan to add this soon :)\r\nFor now you'll need to use `map` to add the fields from one dataset to the other. See the comment here for more info : https://github.com/huggingface/datasets/issues/853#issuecomment-727872188", "Good to hear.\r\nCurrently I ...
1,611,451,566,000
1,654,098,195,000
1,654,098,195,000
NONE
null
to combine 2 dataset by one-one map like ds = zip(ds1, ds2): ds1: {'text'}, ds2: {'text'}, combine ds:{'src', 'tgt'} or different feature: ds1: {'src'}, ds2: {'tgt'}, combine ds:{'src', 'tgt'}
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1770/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1770/timeline
null
completed
null
null
false
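The reply above suggests using `map` to copy fields from one dataset into another until a built-in `zip` exists. A minimal sketch of that one-to-one combination; the toy columns are illustrative only.

```python
from datasets import Dataset

ds1 = Dataset.from_dict({"src": ["a", "b", "c"]})
ds2 = Dataset.from_dict({"tgt": ["x", "y", "z"]})

# Copy ds2's column into ds1 row by row, giving a single dataset with both features.
combined = ds1.map(lambda example, idx: {"tgt": ds2[idx]["tgt"]}, with_indices=True)
print(combined[0])  # {'src': 'a', 'tgt': 'x'}
```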
https://api.github.com/repos/huggingface/datasets/issues/1769
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1769/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1769/comments
https://api.github.com/repos/huggingface/datasets/issues/1769/events
https://github.com/huggingface/datasets/issues/1769
792,523,284
MDU6SXNzdWU3OTI1MjMyODQ=
1,769
_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2
{ "login": "shuaihuaiyi", "id": 14048129, "node_id": "MDQ6VXNlcjE0MDQ4MTI5", "avatar_url": "https://avatars.githubusercontent.com/u/14048129?v=4", "gravatar_id": "", "url": "https://api.github.com/users/shuaihuaiyi", "html_url": "https://github.com/shuaihuaiyi", "followers_url": "https://api.github.com/...
[]
open
false
null
[]
null
[ "More information: `run_mlm.py` will raise same error when `data_args.line_by_line==True`\r\n\r\nhttps://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/examples/language-modeling/run_mlm.py#L300\r\n", "Hi ! What version of python and datasets do you have ? And also what version ...
1,611,396,780,000
1,611,570,237,000
null
NONE
null
It may be a bug of multiprocessing with Datasets, when I disable the multiprocessing by set num_proc to None, everything works fine. The script I use is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py Script args: ``` --model_name_or_path ../../../model/chine...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1769/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1769/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1768
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1768/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1768/comments
https://api.github.com/repos/huggingface/datasets/issues/1768/events
https://github.com/huggingface/datasets/pull/1768
792,150,745
MDExOlB1bGxSZXF1ZXN0NTYwMDgyNzIx
1,768
Mention kwargs in the Dataset Formatting docs
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[]
1,611,333,800,000
1,612,096,390,000
1,611,566,099,000
CONTRIBUTOR
null
Hi, This was discussed in Issue #1762 where the docs didn't mention that keyword arguments to `datasets.Dataset.set_format()` are allowed. To prevent people from having to check the code/method docs, I just added a couple of lines in the docs. Please let me know your thoughts on this. Thanks, Gunjan @lho...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1768/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1768/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1768", "html_url": "https://github.com/huggingface/datasets/pull/1768", "diff_url": "https://github.com/huggingface/datasets/pull/1768.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1768.patch", "merged_at": 1611566099000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1767
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1767/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1767/comments
https://api.github.com/repos/huggingface/datasets/issues/1767/events
https://github.com/huggingface/datasets/pull/1767
792,068,497
MDExOlB1bGxSZXF1ZXN0NTYwMDE2MzE2
1,767
Add Librispeech ASR
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
null
[]
null
[ "> Awesome thank you !\r\n> \r\n> The dummy data are quite big but it was expected given that the raw files are flac files.\r\n> Given that the script doesn't even read the flac files I think we can remove them. Or maybe use empty flac files (see [here](https://hydrogenaud.io/index.php?topic=118685.0) for example)....
1,611,327,277,000
1,611,607,087,000
1,611,607,062,000
MEMBER
null
This PR adds the librispeech asr dataset: https://www.tensorflow.org/datasets/catalog/librispeech There are 2 configs: "clean" and "other" whereas there are two "train" datasets for "clean", hence the name "train.100" and "train.360". As suggested by @lhoestq, due to the enormous size of the dataset in `.arrow` f...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1767/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1767/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1767", "html_url": "https://github.com/huggingface/datasets/pull/1767", "diff_url": "https://github.com/huggingface/datasets/pull/1767.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1767.patch", "merged_at": 1611607062000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1766
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1766/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1766/comments
https://api.github.com/repos/huggingface/datasets/issues/1766/events
https://github.com/huggingface/datasets/issues/1766
792,044,105
MDU6SXNzdWU3OTIwNDQxMDU=
1,766
Issues when run two programs compute the same metrics
{ "login": "lamthuy", "id": 8089862, "node_id": "MDQ6VXNlcjgwODk4NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8089862?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lamthuy", "html_url": "https://github.com/lamthuy", "followers_url": "https://api.github.com/users/lamthuy/...
[]
closed
false
null
[]
null
[ "Hi ! To avoid collisions you can specify a `experiment_id` when instantiating your metric using `load_metric`. It will replace \"default_experiment\" with the experiment id that you provide in the arrow filename. \r\n\r\nAlso when two `experiment_id` collide we're supposed to detect it using our locking mechanism....
1,611,325,375,000
1,612,262,286,000
1,612,262,286,000
NONE
null
I got the following error when running two different programs that both compute sacreblue metrics. It seems that both read/and/write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where it caches the batches: ``` File "train_matching_min.py", line 160, in <module>ch...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1766/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1766/timeline
null
completed
null
null
false
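Per the reply above, giving each run its own `experiment_id` keeps the cached arrow files separate, so two concurrent programs no longer read and write the same file. A minimal sketch; the id strings are placeholders.

```python
from datasets import load_metric

# Each concurrently running program gets its own cache file instead of sharing
# .../sacrebleu/default/default_experiment-1-0.arrow
metric = load_metric("sacrebleu", experiment_id="run_a")  # the other program would use e.g. "run_b"
results = metric.compute(predictions=["hello there"], references=[["hello there"]])
print(results["score"])
```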
https://api.github.com/repos/huggingface/datasets/issues/1765
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1765/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1765/comments
https://api.github.com/repos/huggingface/datasets/issues/1765/events
https://github.com/huggingface/datasets/issues/1765
791,553,065
MDU6SXNzdWU3OTE1NTMwNjU=
1,765
Error iterating over Dataset with DataLoader
{ "login": "EvanZ", "id": 1295082, "node_id": "MDQ6VXNlcjEyOTUwODI=", "avatar_url": "https://avatars.githubusercontent.com/u/1295082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/EvanZ", "html_url": "https://github.com/EvanZ", "followers_url": "https://api.github.com/users/EvanZ/follower...
[]
closed
false
null
[]
null
[ "Instead of:\r\n```python\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32)\r\n```\r\nIt should be:\r\n```python\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32)\r\n```\r\n\r\n`batch_sampler` accepts a Sampler object or an Iterable, so you get an error.", "@...
1,611,269,805,000
1,638,879,753,000
1,611,373,454,000
NONE
null
I have a Dataset that I've mapped a tokenizer over: ``` encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids']) encoded_dataset[:1] ``` ``` {'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]), 'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 2...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1765/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1765/timeline
null
completed
null
null
false
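As the first reply above points out, the integer belongs in `batch_size`, not `batch_sampler` (which expects a Sampler or Iterable). A minimal runnable sketch with a stand-in dataset:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the tokenized `encoded_dataset` from the report above.
encoded_dataset = TensorDataset(torch.zeros(100, 11, dtype=torch.long))

# batch_sampler=32 raises, because an int is not a Sampler/Iterable; batch_size=32 is correct.
dataloader = DataLoader(encoded_dataset, batch_size=32)
print(next(iter(dataloader))[0].shape)  # torch.Size([32, 11])
```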
https://api.github.com/repos/huggingface/datasets/issues/1764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1764/comments
https://api.github.com/repos/huggingface/datasets/issues/1764/events
https://github.com/huggingface/datasets/issues/1764
791,486,860
MDU6SXNzdWU3OTE0ODY4NjA=
1,764
Connection Issues
{ "login": "SaeedNajafi", "id": 12455298, "node_id": "MDQ6VXNlcjEyNDU1Mjk4", "avatar_url": "https://avatars.githubusercontent.com/u/12455298?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SaeedNajafi", "html_url": "https://github.com/SaeedNajafi", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
[ "Academic WIFI was blocking." ]
1,611,262,569,000
1,611,262,819,000
1,611,262,802,000
NONE
null
Today, I am getting connection issues while loading a dataset and the metric. ``` Traceback (most recent call last): File "src/train.py", line 180, in <module> train_dataset, dev_dataset, test_dataset = create_race_dataset() File "src/train.py", line 130, in create_race_dataset train_dataset = load_da...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1764/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1764/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1763
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1763/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1763/comments
https://api.github.com/repos/huggingface/datasets/issues/1763/events
https://github.com/huggingface/datasets/pull/1763
791,389,763
MDExOlB1bGxSZXF1ZXN0NTU5NDU3MTY1
1,763
PAWS-X: Fix csv Dictreader splitting data on quotes
{ "login": "gowtham1997", "id": 9641196, "node_id": "MDQ6VXNlcjk2NDExOTY=", "avatar_url": "https://avatars.githubusercontent.com/u/9641196?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gowtham1997", "html_url": "https://github.com/gowtham1997", "followers_url": "https://api.github.com/us...
[]
closed
false
null
[]
null
[]
1,611,253,261,000
1,611,310,473,000
1,611,310,425,000
CONTRIBUTOR
null
```python from datasets import load_dataset # load english paws-x dataset datasets = load_dataset('paws-x', 'en') print(len(datasets['train'])) # outputs 49202 but official dataset has 49401 pairs print(datasets['train'].unique('label')) # outputs [1, 0, -1] but labels are binary [0,1] ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1763/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1763/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1763", "html_url": "https://github.com/huggingface/datasets/pull/1763", "diff_url": "https://github.com/huggingface/datasets/pull/1763.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1763.patch", "merged_at": 1611310425000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1762
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1762/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1762/comments
https://api.github.com/repos/huggingface/datasets/issues/1762/events
https://github.com/huggingface/datasets/issues/1762
791,226,007
MDU6SXNzdWU3OTEyMjYwMDc=
1,762
Unable to format dataset to CUDA Tensors
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Hi ! You can get CUDA tensors with\r\n\r\n```python\r\ndataset.set_format(\"torch\", columns=columns, device=\"cuda\")\r\n```\r\n\r\nIndeed `set_format` passes the `**kwargs` to `torch.tensor`", "Hi @lhoestq,\r\n\r\nThanks a lot. Is this true for all format types?\r\n\r\nAs in, for 'torch', I can have `**kwargs`...
1,611,243,083,000
1,612,250,002,000
1,612,250,002,000
CONTRIBUTOR
null
Hi, I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show show to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors. I tried this, but Dataset doesn't suppor...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1762/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1762/timeline
null
completed
null
null
false
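The reply above shows that `set_format` forwards extra keyword arguments to `torch.tensor`, so the device can be set directly. A minimal sketch; it assumes a CUDA-capable machine, and dropping `device="cuda"` keeps the tensors on CPU.

```python
from datasets import Dataset

ds = Dataset.from_dict({
    "input_ids": [[101, 2023, 102], [101, 2028, 102]],
    "attention_mask": [[1, 1, 1], [1, 1, 1]],
})

# **kwargs (here: device) are passed through to torch.tensor when formatting.
ds.set_format("torch", columns=["input_ids", "attention_mask"], device="cuda")
print(ds[0]["input_ids"].device)  # cuda:0
```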
https://api.github.com/repos/huggingface/datasets/issues/1761
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1761/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1761/comments
https://api.github.com/repos/huggingface/datasets/issues/1761/events
https://github.com/huggingface/datasets/pull/1761
791,150,858
MDExOlB1bGxSZXF1ZXN0NTU5MjUyMzEw
1,761
Add SILICONE benchmark
{ "login": "eusip", "id": 1551356, "node_id": "MDQ6VXNlcjE1NTEzNTY=", "avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eusip", "html_url": "https://github.com/eusip", "followers_url": "https://api.github.com/users/eusip/follower...
[]
closed
false
null
[]
null
[ "Thanks for the feedback. All your comments have been addressed!", "Thank you for your constructive feedback! I now know how to best format future datasets that our team plans to publish in the near future :)", "Awesome ! Looking forward to it :) ", "Hi @lhoestq ! One last question. Our research team would li...
1,611,239,352,000
1,612,449,168,000
1,611,669,031,000
CONTRIBUTOR
null
My collaborators and I within the Affective Computing team at Telecom Paris would like to re-submit our spoken dialogue dataset for publication. This is a new pull request relative to the [previously closed request](https://github.com/huggingface/datasets/pull/1712) which was reviewed by @lhoestq.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1761/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1761/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1761", "html_url": "https://github.com/huggingface/datasets/pull/1761", "diff_url": "https://github.com/huggingface/datasets/pull/1761.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1761.patch", "merged_at": 1611669031000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1760
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1760/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1760/comments
https://api.github.com/repos/huggingface/datasets/issues/1760/events
https://github.com/huggingface/datasets/pull/1760
791,110,857
MDExOlB1bGxSZXF1ZXN0NTU5MjE3MjY0
1,760
More tags
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "Conll has `multilingual` but is only tagged as `en`", "good catch, that was a bad copy paste x)" ]
1,611,237,010,000
1,611,308,401,000
1,611,308,400,000
MEMBER
null
Since the hub v2 is going to be released soon I figured it would be great to add the missing tags at least for some of the datasets of reference listed [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#write-the-loadingprocessing-code)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1760/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1760/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1760", "html_url": "https://github.com/huggingface/datasets/pull/1760", "diff_url": "https://github.com/huggingface/datasets/pull/1760.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1760.patch", "merged_at": 1611308400000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1759
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1759/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1759/comments
https://api.github.com/repos/huggingface/datasets/issues/1759/events
https://github.com/huggingface/datasets/issues/1759
790,992,226
MDU6SXNzdWU3OTA5OTIyMjY=
1,759
wikipedia dataset incomplete
{ "login": "ChrisDelClea", "id": 19912393, "node_id": "MDQ6VXNlcjE5OTEyMzkz", "avatar_url": "https://avatars.githubusercontent.com/u/19912393?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ChrisDelClea", "html_url": "https://github.com/ChrisDelClea", "followers_url": "https://api.github.c...
[]
closed
false
null
[]
null
[ "Hi !\r\nFrom what pickle file fo you get this ?\r\nI guess you mean the dataset loaded using `load_dataset` ?", "yes sorry, I used the `load_dataset`function and saved the data to a pickle file so I don't always have to reload it and are able to work offline. ", "The wikipedia articles are processed using the ...
1,611,229,635,000
1,611,249,731,000
1,611,249,666,000
NONE
null
Hey guys, I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset. Unfortunately, I found out that there is an incompleteness for the German dataset. For reasons unknown to me, the number of inhabitants has been removed from many pages: Thorey-sur-Ouche has 128 inhabitants a...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1759/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1759/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1758
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1758/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1758/comments
https://api.github.com/repos/huggingface/datasets/issues/1758/events
https://github.com/huggingface/datasets/issues/1758
790,626,116
MDU6SXNzdWU3OTA2MjYxMTY=
1,758
dataset.search() (elastic) cannot reliably retrieve search results
{ "login": "afogarty85", "id": 49048309, "node_id": "MDQ6VXNlcjQ5MDQ4MzA5", "avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/afogarty85", "html_url": "https://github.com/afogarty85", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Hi !\r\nI tried your code on my side and I was able to workaround this issue by waiting a few seconds before querying the index.\r\nMaybe this is because the index is not updated yet on the ElasticSearch side ?", "Thanks for the feedback! I added a 30 second \"sleep\" and that seemed to work well!" ]
1,611,195,997,000
1,611,275,150,000
1,611,275,150,000
NONE
null
I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices. The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer. I am indexing data t...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1758/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1758/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1757
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1757/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1757/comments
https://api.github.com/repos/huggingface/datasets/issues/1757/events
https://github.com/huggingface/datasets/issues/1757
790,466,509
MDU6SXNzdWU3OTA0NjY1MDk=
1,757
FewRel
{ "login": "dspoka", "id": 6183050, "node_id": "MDQ6VXNlcjYxODMwNTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6183050?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dspoka", "html_url": "https://github.com/dspoka", "followers_url": "https://api.github.com/users/dspoka/foll...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[ "+1", "@dspoka Please check the following link : https://github.com/thunlp/FewRel\r\nThis link mentions two versions of the datasets. Also, this one seems to be the official link.\r\n\r\nI am assuming this is the correct link and implementing based on the same.", "Hi @lhoestq,\r\n\r\nThis issue can be closed, I...
1,611,186,963,000
1,615,258,325,000
1,615,214,092,000
NONE
null
## Adding a Dataset - **Name:** FewRel - **Description:** Large-Scale Supervised Few-Shot Relation Classification Dataset - **Paper:** @inproceedings{han2018fewrel, title={FewRel:A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation}, auth...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1757/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1757/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1756
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1756/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1756/comments
https://api.github.com/repos/huggingface/datasets/issues/1756/events
https://github.com/huggingface/datasets/issues/1756
790,380,028
MDU6SXNzdWU3OTAzODAwMjg=
1,756
Ccaligned multilingual translation dataset
{ "login": "flozi00", "id": 47894090, "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "gravatar_id": "", "url": "https://api.github.com/users/flozi00", "html_url": "https://github.com/flozi00", "followers_url": "https://api.github.com/users/flozi0...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
null
[]
null
[]
1,611,181,124,000
1,614,594,981,000
1,614,594,981,000
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1756/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1756/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1755
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1755/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1755/comments
https://api.github.com/repos/huggingface/datasets/issues/1755/events
https://github.com/huggingface/datasets/issues/1755
790,324,734
MDU6SXNzdWU3OTAzMjQ3MzQ=
1,755
Using select/reordering datasets slows operations down immensely
{ "login": "afogarty85", "id": 49048309, "node_id": "MDQ6VXNlcjQ5MDQ4MzA5", "avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4", "gravatar_id": "", "url": "https://api.github.com/users/afogarty85", "html_url": "https://github.com/afogarty85", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "You can use `Dataset.flatten_indices()` to make it fast after a select or shuffle.", "Thanks for the input! I gave that a try by adding this after my selection / reordering operations, but before the big computation task of `score_squad`\r\n\r\n```\r\nexamples = examples.flatten_indices()\r\nfeatures = features....
1,611,177,132,000
1,611,180,219,000
1,611,180,219,000
NONE
null
I am using portions of HF's helpful work in preparing / scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-ordering the dataset, computations slow down immensely where the total scoring process on 131k training examples would take maybe 3 minutes, now take over an hour. The below examp...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1755/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1755/timeline
null
completed
null
null
false
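The first reply above recommends `Dataset.flatten_indices()` after a `select` or `shuffle`: it materialises the reordered rows so that later operations no longer go through an indices mapping. A minimal sketch:

```python
from datasets import Dataset

ds = Dataset.from_dict({"idx": list(range(10_000))})

# select() keeps an indices mapping, which can slow down subsequent processing;
# flatten_indices() rewrites the table in the new order.
reordered = ds.select(list(reversed(range(10_000)))).flatten_indices()
print(reordered[0])  # {'idx': 9999}
```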
https://api.github.com/repos/huggingface/datasets/issues/1754
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1754/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1754/comments
https://api.github.com/repos/huggingface/datasets/issues/1754/events
https://github.com/huggingface/datasets/pull/1754
789,881,730
MDExOlB1bGxSZXF1ZXN0NTU4MTU5NjEw
1,754
Use a config id in the cache directory names for custom configs
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,611,141,060,000
1,611,565,927,000
1,611,565,926,000
MEMBER
null
As noticed by @JetRunner there was some issues when trying to generate a dataset using a custom config that is based on an existing config. For example in the following code the `mnli_custom` would reuse the cache used to create `mnli` instead of generating a new dataset with the new label classes: ```python from ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1754/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1754/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1754", "html_url": "https://github.com/huggingface/datasets/pull/1754", "diff_url": "https://github.com/huggingface/datasets/pull/1754.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1754.patch", "merged_at": 1611565926000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1753
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1753/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1753/comments
https://api.github.com/repos/huggingface/datasets/issues/1753/events
https://github.com/huggingface/datasets/pull/1753
789,867,685
MDExOlB1bGxSZXF1ZXN0NTU4MTQ3Njkx
1,753
fix comet citations
{ "login": "ricardorei", "id": 17256847, "node_id": "MDQ6VXNlcjE3MjU2ODQ3", "avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ricardorei", "html_url": "https://github.com/ricardorei", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[]
1,611,139,958,000
1,611,153,570,000
1,611,153,570,000
CONTRIBUTOR
null
I realized COMET citations were not showing in the hugging face metrics page: <img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105164848-8b9da900-5b0d-11eb-9e20-a38f559d2037.png"> This pull request is intended to fix that. Thanks!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1753/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1753/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1753", "html_url": "https://github.com/huggingface/datasets/pull/1753", "diff_url": "https://github.com/huggingface/datasets/pull/1753.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1753.patch", "merged_at": 1611153570000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1752
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1752/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1752/comments
https://api.github.com/repos/huggingface/datasets/issues/1752/events
https://github.com/huggingface/datasets/pull/1752
789,822,459
MDExOlB1bGxSZXF1ZXN0NTU4MTA5NTA5
1,752
COMET metric citation
{ "login": "ricardorei", "id": 17256847, "node_id": "MDQ6VXNlcjE3MjU2ODQ3", "avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ricardorei", "html_url": "https://github.com/ricardorei", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "I think its better to create a new branch with this fix. I forgot I was still using the old branch." ]
1,611,136,483,000
1,611,138,427,000
1,611,138,302,000
CONTRIBUTOR
null
In my last pull request to add COMET metric, the citations where not following the usual "format". Because of that they where not correctly displayed on the website: <img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105158000-686efb80-5b05-11eb-8bb0-9c8...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1752/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1752/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1752", "html_url": "https://github.com/huggingface/datasets/pull/1752", "diff_url": "https://github.com/huggingface/datasets/pull/1752.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1752.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1751
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1751/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1751/comments
https://api.github.com/repos/huggingface/datasets/issues/1751/events
https://github.com/huggingface/datasets/pull/1751
789,232,980
MDExOlB1bGxSZXF1ZXN0NTU3NjA1ODE2
1,751
Updated README for the Social Bias Frames dataset
{ "login": "mcmillanmajora", "id": 26722925, "node_id": "MDQ6VXNlcjI2NzIyOTI1", "avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mcmillanmajora", "html_url": "https://github.com/mcmillanmajora", "followers_url": "https://api.gi...
[]
closed
false
null
[]
null
[]
1,611,078,780,000
1,611,154,612,000
1,611,154,612,000
CONTRIBUTOR
null
See the updated card at https://github.com/mcmillanmajora/datasets/tree/add-SBIC-card/datasets/social_bias_frames. I incorporated information from the [SBIC data statement](https://homes.cs.washington.edu/~msap/social-bias-frames/DATASTATEMENT.html), paper, and the corpus README file included with the dataset download.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1751/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1751/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1751", "html_url": "https://github.com/huggingface/datasets/pull/1751", "diff_url": "https://github.com/huggingface/datasets/pull/1751.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1751.patch", "merged_at": 1611154612000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1750
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1750/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1750/comments
https://api.github.com/repos/huggingface/datasets/issues/1750/events
https://github.com/huggingface/datasets/pull/1750
788,668,085
MDExOlB1bGxSZXF1ZXN0NTU3MTM1MzM1
1,750
Fix typo in README.md of cnn_dailymail
{ "login": "forest1988", "id": 2755894, "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forest1988", "html_url": "https://github.com/forest1988", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
[ "Good catch, thanks!", "Thank you for merging!" ]
1,611,025,565,000
1,611,054,449,000
1,611,049,723,000
CONTRIBUTOR
null
When I read the README.md of `CNN/DailyMail Dataset`, there seems to be a typo `CCN`. I am afraid this is a trivial matter, but I would like to make a suggestion for revision.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1750/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1750/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1750", "html_url": "https://github.com/huggingface/datasets/pull/1750", "diff_url": "https://github.com/huggingface/datasets/pull/1750.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1750.patch", "merged_at": 1611049723000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1749
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1749/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1749/comments
https://api.github.com/repos/huggingface/datasets/issues/1749/events
https://github.com/huggingface/datasets/pull/1749
788,476,639
MDExOlB1bGxSZXF1ZXN0NTU2OTgxMDc5
1,749
Added metadata and correct splits for swda.
{ "login": "gmihaila", "id": 22454783, "node_id": "MDQ6VXNlcjIyNDU0Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gmihaila", "html_url": "https://github.com/gmihaila", "followers_url": "https://api.github.com/users/gmi...
[]
closed
false
null
[]
null
[ "I will push updates tomorrow.", "@lhoestq thank you for your comments! I went ahead and fixed the code 😃. Please let me know if I missed anything." ]
1,610,994,992,000
1,611,948,952,000
1,611,945,488,000
CONTRIBUTOR
null
Switchboard Dialog Act Corpus I made some changes following @bhavitvyamalik recommendation in #1678: * Contains all metadata. * Used official implementation from the [/swda](https://github.com/cgpotts/swda) repo. * Add official train and test splits used in [Stolcke et al. (2000)](https://web.stanford.edu/~jur...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1749/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1749/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1749", "html_url": "https://github.com/huggingface/datasets/pull/1749", "diff_url": "https://github.com/huggingface/datasets/pull/1749.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1749.patch", "merged_at": 1611945488000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1748
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1748/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1748/comments
https://api.github.com/repos/huggingface/datasets/issues/1748/events
https://github.com/huggingface/datasets/pull/1748
788,431,642
MDExOlB1bGxSZXF1ZXN0NTU2OTQ0NDEx
1,748
add Stuctured Argument Extraction for Korean dataset
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/ste...
[]
closed
false
null
[]
null
[]
1,610,990,059,000
1,631,897,598,000
1,611,055,618,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1748/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1748/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1748", "html_url": "https://github.com/huggingface/datasets/pull/1748", "diff_url": "https://github.com/huggingface/datasets/pull/1748.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1748.patch", "merged_at": 1611055618000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1747
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1747/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1747/comments
https://api.github.com/repos/huggingface/datasets/issues/1747/events
https://github.com/huggingface/datasets/issues/1747
788,299,775
MDU6SXNzdWU3ODgyOTk3NzU=
1,747
datasets slicing with seed
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "f...
[]
open
false
null
[]
null
[ "Hi :) \r\nThe slicing API from https://huggingface.co/docs/datasets/splits.html doesn't shuffle the data.\r\nYou can shuffle and then take a subset of your dataset with\r\n```python\r\n# shuffle and take the first 100 examples\r\ndataset = dataset.shuffle(seed=42).select(range(100))\r\n```\r\n\r\nYou can find more...
1,610,978,935,000
1,610,981,134,000
null
NONE
null
Hi I need to slice a dataset with random seed, I looked into documentation here https://huggingface.co/docs/datasets/splits.html I could not find a seed option, could you assist me please how I can get a slice for different seeds? thank you. @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1747/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1747/timeline
null
null
null
null
false
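The reply above shows the idiom for taking a random, seed-controlled slice: shuffle with a fixed seed, then select a range. A minimal sketch:

```python
from datasets import Dataset

dataset = Dataset.from_dict({"text": [f"example {i}" for i in range(1_000)]})

# Shuffle deterministically, then keep the first 100 rows of the shuffled order;
# a different seed yields a different random slice.
subset = dataset.shuffle(seed=42).select(range(100))
print(len(subset), subset[0])
```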
https://api.github.com/repos/huggingface/datasets/issues/1746
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1746/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1746/comments
https://api.github.com/repos/huggingface/datasets/issues/1746/events
https://github.com/huggingface/datasets/pull/1746
788,188,184
MDExOlB1bGxSZXF1ZXN0NTU2NzQxMjIw
1,746
Fix release conda worflow
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,610,969,350,000
1,610,969,484,000
1,610,969,483,000
MEMBER
null
The current workflow yaml file is not valid according to https://github.com/huggingface/datasets/actions/runs/487638110
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1746/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1746/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1746", "html_url": "https://github.com/huggingface/datasets/pull/1746", "diff_url": "https://github.com/huggingface/datasets/pull/1746.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1746.patch", "merged_at": 1610969483000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1745
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1745/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1745/comments
https://api.github.com/repos/huggingface/datasets/issues/1745/events
https://github.com/huggingface/datasets/issues/1745
787,838,256
MDU6SXNzdWU3ODc4MzgyNTY=
1,745
difference between wsc and wsc.fixed for superglue
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "f...
[]
closed
false
null
[]
null
[ "From the description given in the dataset script for `wsc.fixed`:\r\n```\r\nThis version fixes issues where the spans are not actually substrings of the text.\r\n```" ]
1,610,931,019,000
1,610,967,763,000
1,610,931,574,000
NONE
null
Hi I see two versions of wsc in superglue, and I am not sure what is the differences and which one is the original one. could you help to discuss the differences? thanks @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1745/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1745/timeline
null
completed
null
null
false
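For the question above, both configurations can be loaded side by side; per the reply, "wsc.fixed" only repairs spans that are not actual substrings of the text. A sketch (it downloads the SuperGLUE data):

```python
from datasets import load_dataset

wsc = load_dataset("super_glue", "wsc", split="train")
wsc_fixed = load_dataset("super_glue", "wsc.fixed", split="train")

# Same task and size; the .fixed variant corrects misaligned span annotations.
print(len(wsc), len(wsc_fixed))
print(wsc_fixed[0]["span1_text"], wsc_fixed[0]["span2_text"])
```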
https://api.github.com/repos/huggingface/datasets/issues/1744
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1744/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1744/comments
https://api.github.com/repos/huggingface/datasets/issues/1744/events
https://github.com/huggingface/datasets/pull/1744
787,649,811
MDExOlB1bGxSZXF1ZXN0NTU2MzA0MjU4
1,744
Add missing "brief" entries to reuters
{ "login": "jbragg", "id": 2238344, "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jbragg", "html_url": "https://github.com/jbragg", "followers_url": "https://api.github.com/users/jbragg/foll...
[]
closed
false
null
[]
null
[ "@lhoestq I ran `make style` but CI code quality still failing and I don't have access to logs", "It's also likely that due to the previous placement of the field initialization, much of the data about topics etc was simply wrong and carried over from previous entries. Model scores seem to improve significantly w...
1,610,870,329,000
1,610,969,169,000
1,610,969,169,000
CONTRIBUTOR
null
This brings the number of examples for ModApte to match the stated `Training set (9,603 docs)...Test Set (3,299 docs)`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1744/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1744/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1744", "html_url": "https://github.com/huggingface/datasets/pull/1744", "diff_url": "https://github.com/huggingface/datasets/pull/1744.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1744.patch", "merged_at": 1610969169000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1743
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1743/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1743/comments
https://api.github.com/repos/huggingface/datasets/issues/1743/events
https://github.com/huggingface/datasets/issues/1743
787,631,412
MDU6SXNzdWU3ODc2MzE0MTI=
1,743
Issue while Creating Custom Metric
{ "login": "gchhablani", "id": 29076344, "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gchhablani", "html_url": "https://github.com/gchhablani", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Currently it's only possible to define the features for the two columns `references` and `predictions`.\r\nThe data for these columns can then be passed to `metric.add_batch` and `metric.compute`.\r\nInstead of defining more columns `text`, `offset_mapping` and `ground` you must include them in either references a...
1,610,866,874,000
1,654,098,574,000
1,654,098,574,000
CONTRIBUTOR
null
Hi Team, I am trying to create a custom metric for my training as follows, where f1 is my own metric: ```python def _info(self): # TODO: Specifies the datasets.MetricInfo object return datasets.MetricInfo( # This is the description that will appear on the metrics page. ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1743/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1743/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1742
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1742/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1742/comments
https://api.github.com/repos/huggingface/datasets/issues/1742/events
https://github.com/huggingface/datasets/pull/1742
787,623,640
MDExOlB1bGxSZXF1ZXN0NTU2MjgyMDYw
1,742
Add GLUE Compat (compatible with transformers<3.5.0)
{ "login": "JetRunner", "id": 22514219, "node_id": "MDQ6VXNlcjIyNTE0MjE5", "avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JetRunner", "html_url": "https://github.com/JetRunner", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[ "Maybe it would be simpler to just overwrite the order of the label classes of the `glue` dataset ?\r\n```python\r\nmnli = load_dataset(\"glue\", \"mnli\", label_classes=[\"contradiction\", \"entailment\", \"neutral\"])\r\n```", "Sounds good. Will close the issue if that works." ]
1,610,862,865,000
1,617,021,810,000
1,617,021,810,000
MEMBER
null
Link to our discussion on Slack (HF internal) https://huggingface.slack.com/archives/C014N4749J9/p1609668119337400 The next step is to add a compatible option in the new `run_glue.py` I duplicated `glue` and made the following changes: 1. Change the name to `glue_compat`. 2. Change the label assignments for MN...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1742/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1742/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1742", "html_url": "https://github.com/huggingface/datasets/pull/1742", "diff_url": "https://github.com/huggingface/datasets/pull/1742.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1742.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1741
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1741/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1741/comments
https://api.github.com/repos/huggingface/datasets/issues/1741/events
https://github.com/huggingface/datasets/issues/1741
787,327,060
MDU6SXNzdWU3ODczMjcwNjA=
1,741
error when run fine_tuning on text_classification
{ "login": "XiaoYang66", "id": 43234824, "node_id": "MDQ6VXNlcjQzMjM0ODI0", "avatar_url": "https://avatars.githubusercontent.com/u/43234824?v=4", "gravatar_id": "", "url": "https://api.github.com/users/XiaoYang66", "html_url": "https://github.com/XiaoYang66", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "none" ]
1,610,763,799,000
1,610,764,768,000
1,610,764,758,000
NONE
null
dataset:sem_eval_2014_task_1 pretrained_model:bert-base-uncased error description: when i use these resoruce to train fine_tuning a text_classification on sem_eval_2014_task_1,there always be some problem(when i use other dataset ,there exist the error too). And i followed the colab code (url:https://colab.researc...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1741/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1741/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1740
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1740/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1740/comments
https://api.github.com/repos/huggingface/datasets/issues/1740/events
https://github.com/huggingface/datasets/pull/1740
787,264,605
MDExOlB1bGxSZXF1ZXN0NTU2MDA5NjM1
1,740
add id_liputan6 dataset
{ "login": "cahya-wirawan", "id": 7669893, "node_id": "MDQ6VXNlcjc2Njk4OTM=", "avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cahya-wirawan", "html_url": "https://github.com/cahya-wirawan", "followers_url": "https://api.github....
[]
closed
false
null
[]
null
[]
1,610,751,514,000
1,611,150,086,000
1,611,150,086,000
CONTRIBUTOR
null
id_liputan6 is a large-scale Indonesian summarization dataset. The articles were harvested from an online news portal, and obtain 215,827 document-summary pairs: https://arxiv.org/abs/2011.00679
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1740/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1740/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1740", "html_url": "https://github.com/huggingface/datasets/pull/1740", "diff_url": "https://github.com/huggingface/datasets/pull/1740.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1740.patch", "merged_at": 1611150086000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1739
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1739/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1739/comments
https://api.github.com/repos/huggingface/datasets/issues/1739/events
https://github.com/huggingface/datasets/pull/1739
787,219,138
MDExOlB1bGxSZXF1ZXN0NTU1OTY5Njgx
1,739
fixes and improvements for the WebNLG loader
{ "login": "Shimorina", "id": 9607332, "node_id": "MDQ6VXNlcjk2MDczMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/9607332?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Shimorina", "html_url": "https://github.com/Shimorina", "followers_url": "https://api.github.com/users/Sh...
[]
closed
false
null
[]
null
[ "The dataset card is fantastic!\r\n\r\nLooks good to me! Did you check that this still passes the slow tests with the existing dummy data?", "Yes, I ran and passed all the tests specified in [this guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#automatically-add-code-metadata), inclu...
1,610,747,123,000
1,611,930,846,000
1,611,917,583,000
CONTRIBUTOR
null
- fixes test sets loading in v3.0 - adds additional fields for v3.0_ru - adds info to the WebNLG data card
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1739/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1739/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1739", "html_url": "https://github.com/huggingface/datasets/pull/1739", "diff_url": "https://github.com/huggingface/datasets/pull/1739.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1739.patch", "merged_at": 1611917583000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1738
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1738/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1738/comments
https://api.github.com/repos/huggingface/datasets/issues/1738/events
https://github.com/huggingface/datasets/pull/1738
786,068,440
MDExOlB1bGxSZXF1ZXN0NTU0OTk2NDU4
1,738
Conda support
{ "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/...
[]
closed
false
null
[]
null
[ "Nice thanks :) \r\nNote that in `datasets` the tags are simply the version without the `v`. For example `1.2.1`.", "Do you push tags only for versions?", "Yes I've always used tags only for versions" ]
1,610,637,085,000
1,610,705,300,000
1,610,705,299,000
MEMBER
null
Will push a new version on anaconda cloud every time a tag starting with `v` is pushed (like `v1.2.2`). Will appear here: https://anaconda.org/huggingface/datasets Depends on `conda-forge` for now, so the following is required for installation: ``` conda install -c huggingface -c conda-forge datasets ```
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1738/reactions", "total_count": 4, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 4, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1738/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1738", "html_url": "https://github.com/huggingface/datasets/pull/1738", "diff_url": "https://github.com/huggingface/datasets/pull/1738.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1738.patch", "merged_at": 1610705298000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1737
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1737/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1737/comments
https://api.github.com/repos/huggingface/datasets/issues/1737/events
https://github.com/huggingface/datasets/pull/1737
785,606,286
MDExOlB1bGxSZXF1ZXN0NTU0NjA2ODg5
1,737
update link in TLC to be github links
{ "login": "chameleonTK", "id": 6429850, "node_id": "MDQ6VXNlcjY0Mjk4NTA=", "avatar_url": "https://avatars.githubusercontent.com/u/6429850?v=4", "gravatar_id": "", "url": "https://api.github.com/users/chameleonTK", "html_url": "https://github.com/chameleonTK", "followers_url": "https://api.github.com/us...
[]
closed
false
null
[]
null
[ "Thanks for updating this!" ]
1,610,592,561,000
1,610,619,924,000
1,610,619,924,000
CONTRIBUTOR
null
Base on this issue https://github.com/huggingface/datasets/issues/1064, I can now use the official links.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1737/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1737/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1737", "html_url": "https://github.com/huggingface/datasets/pull/1737", "diff_url": "https://github.com/huggingface/datasets/pull/1737.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1737.patch", "merged_at": 1610619924000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1736
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1736/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1736/comments
https://api.github.com/repos/huggingface/datasets/issues/1736/events
https://github.com/huggingface/datasets/pull/1736
785,433,854
MDExOlB1bGxSZXF1ZXN0NTU0NDYyNjYw
1,736
Adjust BrWaC dataset features name
{ "login": "jonatasgrosman", "id": 5097052, "node_id": "MDQ6VXNlcjUwOTcwNTI=", "avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jonatasgrosman", "html_url": "https://github.com/jonatasgrosman", "followers_url": "https://api.gith...
[]
closed
false
null
[]
null
[]
1,610,570,344,000
1,610,620,178,000
1,610,620,178,000
CONTRIBUTOR
null
I added this dataset some days ago, and today I used it to train some models and realized that the names of the features aren't so good. Looking at the current features hierarchy, we have "paragraphs" with a list of "sentences" with a list of "sentences?!". But the actual hierarchy is a "text" with a list of "paragr...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1736/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1736/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1736", "html_url": "https://github.com/huggingface/datasets/pull/1736", "diff_url": "https://github.com/huggingface/datasets/pull/1736.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1736.patch", "merged_at": 1610620178000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1735
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1735/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1735/comments
https://api.github.com/repos/huggingface/datasets/issues/1735/events
https://github.com/huggingface/datasets/pull/1735
785,184,740
MDExOlB1bGxSZXF1ZXN0NTU0MjUzMDcw
1,735
Update add new dataset template
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugge...
[]
closed
false
null
[]
null
[ "Add new \"dataset\"? ;)", "Lol, too used to Transformers ;-)" ]
1,610,550,489,000
1,610,637,361,000
1,610,637,360,000
MEMBER
null
This PR fixes a few typos in the "Add new dataset template" and clarifies a bit what to do for the dummy data creation when the `auto_generate` flag can't work.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1735/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1735/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1735", "html_url": "https://github.com/huggingface/datasets/pull/1735", "diff_url": "https://github.com/huggingface/datasets/pull/1735.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1735.patch", "merged_at": 1610637360000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1734
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1734/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1734/comments
https://api.github.com/repos/huggingface/datasets/issues/1734/events
https://github.com/huggingface/datasets/pull/1734
784,956,707
MDExOlB1bGxSZXF1ZXN0NTU0MDYxMzMz
1,734
Fix empty token bug for `thainer` and `lst20`
{ "login": "cstorm125", "id": 15519308, "node_id": "MDQ6VXNlcjE1NTE5MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cstorm125", "html_url": "https://github.com/cstorm125", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[]
1,610,531,709,000
1,610,620,938,000
1,610,620,938,000
CONTRIBUTOR
null
add a condition to check if tokens exist before yielding in `thainer` and `lst20`
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1734/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1734/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1734", "html_url": "https://github.com/huggingface/datasets/pull/1734", "diff_url": "https://github.com/huggingface/datasets/pull/1734.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1734.patch", "merged_at": 1610620938000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1733
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1733/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1733/comments
https://api.github.com/repos/huggingface/datasets/issues/1733/events
https://github.com/huggingface/datasets/issues/1733
784,903,002
MDU6SXNzdWU3ODQ5MDMwMDI=
1,733
connection issue with glue, what is the data url for glue?
{ "login": "ghost", "id": 10137, "node_id": "MDQ6VXNlcjEwMTM3", "avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ghost", "html_url": "https://github.com/ghost", "followers_url": "https://api.github.com/users/ghost/followers", "f...
[]
closed
false
null
[]
null
[ "Hello @juliahane, which config of GLUE causes you trouble?\r\nThe URLs are defined in the dataset script source code: https://github.com/huggingface/datasets/blob/master/datasets/glue/glue.py" ]
1,610,527,060,000
1,628,100,835,000
1,628,100,835,000
NONE
null
Hi my codes sometimes fails due to connection issue with glue, could you tell me how I can have the URL datasets library is trying to read GLUE from to test the machines I am working on if there is an issue on my side or not thanks
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1733/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1733/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1732
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1732/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1732/comments
https://api.github.com/repos/huggingface/datasets/issues/1732/events
https://github.com/huggingface/datasets/pull/1732
784,874,490
MDExOlB1bGxSZXF1ZXN0NTUzOTkzNTAx
1,732
[GEM Dataset] Added TurkCorpus, an evaluation dataset for sentence simplification.
{ "login": "mounicam", "id": 11708999, "node_id": "MDQ6VXNlcjExNzA4OTk5", "avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mounicam", "html_url": "https://github.com/mounicam", "followers_url": "https://api.github.com/users/mou...
[]
closed
false
null
[]
null
[ "Thank you for the feedback! I updated the code. " ]
1,610,524,219,000
1,610,619,581,000
1,610,619,581,000
CONTRIBUTOR
null
We want to use TurkCorpus for validation and testing of the sentence simplification task.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1732/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1732/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1732", "html_url": "https://github.com/huggingface/datasets/pull/1732", "diff_url": "https://github.com/huggingface/datasets/pull/1732.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1732.patch", "merged_at": 1610619580000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1731
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1731/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1731/comments
https://api.github.com/repos/huggingface/datasets/issues/1731/events
https://github.com/huggingface/datasets/issues/1731
784,744,674
MDU6SXNzdWU3ODQ3NDQ2NzQ=
1,731
Couldn't reach swda.py
{ "login": "yangp725", "id": 13365326, "node_id": "MDQ6VXNlcjEzMzY1MzI2", "avatar_url": "https://avatars.githubusercontent.com/u/13365326?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yangp725", "html_url": "https://github.com/yangp725", "followers_url": "https://api.github.com/users/yan...
[]
closed
false
null
[]
null
[ "Hi @yangp725,\r\nThe SWDA has been added very recently and has not been released yet, thus it is not available in the `1.2.0` version of 🤗`datasets`.\r\nYou can still access it by installing the latest version of the library (master branch), by following instructions in [this issue](https://github.com/huggingface...
1,610,506,660,000
1,610,536,660,000
1,610,536,660,000
NONE
null
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/swda/swda.py
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1731/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1731/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1730
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1730/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1730/comments
https://api.github.com/repos/huggingface/datasets/issues/1730/events
https://github.com/huggingface/datasets/pull/1730
784,617,525
MDExOlB1bGxSZXF1ZXN0NTUzNzgxMDY0
1,730
Add MNIST dataset
{ "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugge...
[]
closed
false
null
[]
null
[]
1,610,488,082,000
1,610,533,187,000
1,610,533,186,000
MEMBER
null
This PR adds the MNIST dataset to the library.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1730/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1730/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1730", "html_url": "https://github.com/huggingface/datasets/pull/1730", "diff_url": "https://github.com/huggingface/datasets/pull/1730.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1730.patch", "merged_at": 1610533186000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1729
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1729/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1729/comments
https://api.github.com/repos/huggingface/datasets/issues/1729/events
https://github.com/huggingface/datasets/issues/1729
784,565,898
MDU6SXNzdWU3ODQ1NjU4OTg=
1,729
Is there support for Deep learning datasets?
{ "login": "pablodz", "id": 28235457, "node_id": "MDQ6VXNlcjI4MjM1NDU3", "avatar_url": "https://avatars.githubusercontent.com/u/28235457?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pablodz", "html_url": "https://github.com/pablodz", "followers_url": "https://api.github.com/users/pablod...
[]
closed
false
null
[]
null
[ "Hi @ZurMaD!\r\nThanks for your interest in 🤗 `datasets`. Support for image datasets is at an early stage, with CIFAR-10 added in #1617 \r\nMNIST is also on the way: #1730 \r\n\r\nIf you feel like adding another image dataset, I would advise starting by reading the [ADD_NEW_DATASET.md](https://github.com/huggingfa...
1,610,482,961,000
1,617,164,647,000
1,617,164,647,000
NONE
null
I looked around this repository and looking the datasets I think that there's no support for images-datasets. Or am I missing something? For example to add a repo like this https://github.com/DZPeru/fish-datasets
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1729/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1729/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1728
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1728/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1728/comments
https://api.github.com/repos/huggingface/datasets/issues/1728/events
https://github.com/huggingface/datasets/issues/1728
784,458,342
MDU6SXNzdWU3ODQ0NTgzNDI=
1,728
Add an entry to an arrow dataset
{ "login": "ameet-1997", "id": 18645407, "node_id": "MDQ6VXNlcjE4NjQ1NDA3", "avatar_url": "https://avatars.githubusercontent.com/u/18645407?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ameet-1997", "html_url": "https://github.com/ameet-1997", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Hi @ameet-1997,\r\nI think what you are looking for is the `concatenate_datasets` function: https://huggingface.co/docs/datasets/processing.html?highlight=concatenate#concatenate-several-datasets\r\n\r\nFor your use case, I would use the [`map` method](https://huggingface.co/docs/datasets/processing.html?highlight...
1,610,474,507,000
1,610,997,332,000
1,610,997,332,000
NONE
null
Is it possible to add an entry to a dataset object? **Motivation: I want to transform the sentences in the dataset and add them to the original dataset** For example, say we have the following code: ``` python from datasets import load_dataset # Load a dataset and print the first examples in the training s...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1728/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1728/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1727
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1727/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1727/comments
https://api.github.com/repos/huggingface/datasets/issues/1727/events
https://github.com/huggingface/datasets/issues/1727
784,435,131
MDU6SXNzdWU3ODQ0MzUxMzE=
1,727
BLEURT score calculation raises UnrecognizedFlagError
{ "login": "nadavo", "id": 6603920, "node_id": "MDQ6VXNlcjY2MDM5MjA=", "avatar_url": "https://avatars.githubusercontent.com/u/6603920?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nadavo", "html_url": "https://github.com/nadavo", "followers_url": "https://api.github.com/users/nadavo/foll...
[]
closed
false
null
[]
null
[ "Upgrading tensorflow to version 2.4.0 solved the issue.", "I still have the same error even with TF 2.4.0.", "And I have the same error with TF 2.4.1. I believe this issue should be reopened. Any ideas?!", "I'm seeing the same issue with TF 2.4.1 when running the following in https://colab.research.google.co...
1,610,472,422,000
1,654,099,562,000
1,654,099,562,000
NONE
null
Calling the `compute` method for **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`. My environment: ``` python==3.8.5 datasets==1.2.0 tensorflow==2.3.1 cudatoolkit==11.0.221 ``` Test code for reproducing the error: ``` from datasets import load_metric bleurt = load_me...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1727/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1727/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1726
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1726/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1726/comments
https://api.github.com/repos/huggingface/datasets/issues/1726/events
https://github.com/huggingface/datasets/pull/1726
784,336,370
MDExOlB1bGxSZXF1ZXN0NTUzNTQ0ODg4
1,726
Offline loading
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "It's maybe a bit annoying to add but could we maybe have as well a version of the local data loading scripts in the package?\r\nThe `text`, `json`, `csv`. Thinking about people like in #1725 who are expecting to be able to work with local data without downloading anything.\r\n\r\nMaybe we can add them to package_d...
1,610,464,917,000
1,644,921,130,000
1,611,074,552,000
MEMBER
null
As discussed in #824 it would be cool to make the library work in offline mode. Currently if there's not internet connection then modules (datasets or metrics) that have already been loaded in the past can't be loaded and it raises a ConnectionError. This is because `prepare_module` fetches online for the latest vers...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1726/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 1, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1726/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1726", "html_url": "https://github.com/huggingface/datasets/pull/1726", "diff_url": "https://github.com/huggingface/datasets/pull/1726.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1726.patch", "merged_at": 1611074552000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1725
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1725/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1725/comments
https://api.github.com/repos/huggingface/datasets/issues/1725/events
https://github.com/huggingface/datasets/issues/1725
784,182,273
MDU6SXNzdWU3ODQxODIyNzM=
1,725
load the local dataset
{ "login": "xinjicong", "id": 41193842, "node_id": "MDQ6VXNlcjQxMTkzODQy", "avatar_url": "https://avatars.githubusercontent.com/u/41193842?v=4", "gravatar_id": "", "url": "https://api.github.com/users/xinjicong", "html_url": "https://github.com/xinjicong", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[ "You should rephrase your question or give more examples and details on what you want to do.\r\n\r\nit’s not possible to understand it and help you with only this information.", "sorry for that.\r\ni want to know how could i load the train set and the test set from the local ,which api or function should i use .\...
1,610,453,575,000
1,654,099,259,000
1,654,099,259,000
NONE
null
your guidebook's example is like >>>from datasets import load_dataset >>> dataset = load_dataset('json', data_files='my_file.json') but the first arg is path... so how should i do if i want to load the local dataset for model training? i will be grateful if you can help me handle this problem! thanks a lot!
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1725/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1725/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1723
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1723/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1723/comments
https://api.github.com/repos/huggingface/datasets/issues/1723/events
https://github.com/huggingface/datasets/pull/1723
783,982,100
MDExOlB1bGxSZXF1ZXN0NTUzMjQ4MzU1
1,723
ADD S3 support for downloading and uploading processed datasets
{ "login": "philschmid", "id": 32632186, "node_id": "MDQ6VXNlcjMyNjMyMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/philschmid", "html_url": "https://github.com/philschmid", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "I created the documentation for `FileSystem Integration for cloud storage` with loading and saving datasets to/from a filesystem with an example of using `datasets.filesystem.S3Filesystem`. I added a note on the `Saving a processed dataset on disk and reload` saying that it is also possible to use other filesystem...
1,610,435,854,000
1,611,680,528,000
1,611,680,528,000
MEMBER
null
# What does this PR do? This PR adds the functionality to load and save `datasets` from and to s3. You can save `datasets` with either `Dataset.save_to_disk()` or `DatasetDict.save_to_disk`. You can load `datasets` with either `load_from_disk` or `Dataset.load_from_disk()`, `DatasetDict.load_from_disk()`. Lo...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1723/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 3, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1723/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1723", "html_url": "https://github.com/huggingface/datasets/pull/1723", "diff_url": "https://github.com/huggingface/datasets/pull/1723.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1723.patch", "merged_at": 1611680527000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1724
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1724/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1724/comments
https://api.github.com/repos/huggingface/datasets/issues/1724/events
https://github.com/huggingface/datasets/issues/1724
784,023,338
MDU6SXNzdWU3ODQwMjMzMzg=
1,724
could not run models on a offline server successfully
{ "login": "lkcao", "id": 49967236, "node_id": "MDQ6VXNlcjQ5OTY3MjM2", "avatar_url": "https://avatars.githubusercontent.com/u/49967236?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lkcao", "html_url": "https://github.com/lkcao", "followers_url": "https://api.github.com/users/lkcao/follow...
[]
open
false
null
[]
null
[ "Transferred to `datasets` based on the stack trace.", "Hi @lkcao !\r\nYour issue is indeed related to `datasets`. In addition to installing the package manually, you will need to download the `text.py` script on your server. You'll find it (under `datasets/datasets/text`: https://github.com/huggingface/datasets/...
1,610,431,686,000
1,614,785,549,000
null
NONE
null
Hi, I really need your help about this. I am trying to fine-tuning a RoBERTa on a remote server, which is strictly banning internet. I try to install all the packages by hand and try to run run_mlm.py on the server. It works well on colab, but when I try to run it on this offline server, it shows: ![image](https://us...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1724/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/1724/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1722
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1722/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1722/comments
https://api.github.com/repos/huggingface/datasets/issues/1722/events
https://github.com/huggingface/datasets/pull/1722
783,921,679
MDExOlB1bGxSZXF1ZXN0NTUzMTk3MTg4
1,722
Added unfiltered versions of the Wiki-Auto training data for the GEM simplification task.
{ "login": "mounicam", "id": 11708999, "node_id": "MDQ6VXNlcjExNzA4OTk5", "avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4", "gravatar_id": "", "url": "https://api.github.com/users/mounicam", "html_url": "https://github.com/mounicam", "followers_url": "https://api.github.com/users/mou...
[]
closed
false
null
[]
null
[ "The current version of Wiki-Auto dataset contains a filtered version of the aligned dataset. The commit adds unfiltered versions of the data that can be useful the GEM task participants." ]
1,610,429,164,000
1,610,475,293,000
1,610,472,957,000
CONTRIBUTOR
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1722/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1722/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1722", "html_url": "https://github.com/huggingface/datasets/pull/1722", "diff_url": "https://github.com/huggingface/datasets/pull/1722.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1722.patch", "merged_at": 1610472957000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1721
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1721/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1721/comments
https://api.github.com/repos/huggingface/datasets/issues/1721/events
https://github.com/huggingface/datasets/pull/1721
783,828,428
MDExOlB1bGxSZXF1ZXN0NTUzMTIyODQ5
1,721
[Scientific papers] Mirror datasets zip
{ "login": "patrickvonplaten", "id": 23423619, "node_id": "MDQ6VXNlcjIzNDIzNjE5", "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "gravatar_id": "", "url": "https://api.github.com/users/patrickvonplaten", "html_url": "https://github.com/patrickvonplaten", "followers_url": "https://...
[]
closed
false
null
[]
null
[ "> Nice !\r\n> \r\n> Could you try to reduce the size of the dummy_data.zip files ? they're quite big (300KB)\r\n\r\nYes, I think it might make sense to enhance the tool a tiny bit to prevent this automatically", "That's the lightest I can make it...it's long-range summarization so a single sample has ~11000 toke...
1,610,414,140,000
1,610,452,155,000
1,610,451,707,000
MEMBER
null
Datasets were uploading to https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/arxiv-dataset.zip and https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/pubmed-dataset.zip respectively to escape google drive quota and enable faster download.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1721/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1721/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1721", "html_url": "https://github.com/huggingface/datasets/pull/1721", "diff_url": "https://github.com/huggingface/datasets/pull/1721.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1721.patch", "merged_at": 1610451707000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1720
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1720/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1720/comments
https://api.github.com/repos/huggingface/datasets/issues/1720/events
https://github.com/huggingface/datasets/pull/1720
783,721,833
MDExOlB1bGxSZXF1ZXN0NTUzMDM0MzYx
1,720
Adding the NorNE dataset for NER
{ "login": "versae", "id": 173537, "node_id": "MDQ6VXNlcjE3MzUzNw==", "avatar_url": "https://avatars.githubusercontent.com/u/173537?v=4", "gravatar_id": "", "url": "https://api.github.com/users/versae", "html_url": "https://github.com/versae", "followers_url": "https://api.github.com/users/versae/follow...
[]
closed
false
null
[]
null
[ "Quick question, @lhoestq. In this specific dataset, two special types `GPE_LOC` and `GPE_ORG` can easily be altered depending on the task, choosing either the more general `GPE` tag or the more specific `LOC`/`ORG` tags, conflating them with the other annotations of the same type. However, I have not found an easy...
1,610,400,853,000
1,617,200,629,000
1,617,199,997,000
CONTRIBUTOR
null
NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, or...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1720/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1720/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1720", "html_url": "https://github.com/huggingface/datasets/pull/1720", "diff_url": "https://github.com/huggingface/datasets/pull/1720.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1720.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1719
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1719/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1719/comments
https://api.github.com/repos/huggingface/datasets/issues/1719/events
https://github.com/huggingface/datasets/pull/1719
783,557,542
MDExOlB1bGxSZXF1ZXN0NTUyODk3MzY4
1,719
Fix column list comparison in transmit format
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,610,385,836,000
1,610,390,703,000
1,610,390,702,000
MEMBER
null
As noticed in #1718 the cache might not reload the cache files when new columns were added. This is because of an issue in `transmit_format` where the column list comparison fails because the order was not deterministic. This causes the `transmit_format` to apply an unnecessary `set_format` transform with shuffled col...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1719/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1719/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1719", "html_url": "https://github.com/huggingface/datasets/pull/1719", "diff_url": "https://github.com/huggingface/datasets/pull/1719.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1719.patch", "merged_at": 1610390702000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1718
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1718/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1718/comments
https://api.github.com/repos/huggingface/datasets/issues/1718/events
https://github.com/huggingface/datasets/issues/1718
783,474,753
MDU6SXNzdWU3ODM0NzQ3NTM=
1,718
Possible cache miss in datasets
{ "login": "ofirzaf", "id": 18296312, "node_id": "MDQ6VXNlcjE4Mjk2MzEy", "avatar_url": "https://avatars.githubusercontent.com/u/18296312?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ofirzaf", "html_url": "https://github.com/ofirzaf", "followers_url": "https://api.github.com/users/ofirza...
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nI was able to reproduce thanks to your code and find the origin of the bug.\r\nThe cache was not reusing the same file because one object was not deterministic. It comes from a conversion from `set` to `list` in the `datasets.arrrow_dataset.transmit_format` function, where the resulting l...
1,610,379,451,000
1,655,403,232,000
1,611,629,279,000
NONE
null
Hi, I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache. I have attached an example script that for me reproduces the problem. In the attached example the second map function always recomputes instead of loading fr...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1718/reactions", "total_count": 2, "+1": 2, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1718/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1717
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1717/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1717/comments
https://api.github.com/repos/huggingface/datasets/issues/1717/events
https://github.com/huggingface/datasets/issues/1717
783,074,255
MDU6SXNzdWU3ODMwNzQyNTU=
1,717
SciFact dataset - minor changes
{ "login": "dwadden", "id": 3091916, "node_id": "MDQ6VXNlcjMwOTE5MTY=", "avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dwadden", "html_url": "https://github.com/dwadden", "followers_url": "https://api.github.com/users/dwadden/...
[]
closed
false
null
[]
null
[ "Hi Dave,\r\nYou are more than welcome to open a PR to make these changes! 🤗\r\nYou will find the relevant information about opening a PR in the [contributing guide](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md) and in the [dataset addition guide](https://github.com/huggingface/datasets/blob...
1,610,342,800,000
1,611,629,537,000
1,611,629,537,000
CONTRIBUTOR
null
Hi, SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated! I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this? It also looks like the dataset is being downloa...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1717/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1717/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1716
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1716/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1716/comments
https://api.github.com/repos/huggingface/datasets/issues/1716/events
https://github.com/huggingface/datasets/pull/1716
782,819,006
MDExOlB1bGxSZXF1ZXN0NTUyMjgzNzE5
1,716
Add Hatexplain Dataset
{ "login": "kushal2000", "id": 48222101, "node_id": "MDQ6VXNlcjQ4MjIyMTAx", "avatar_url": "https://avatars.githubusercontent.com/u/48222101?v=4", "gravatar_id": "", "url": "https://api.github.com/users/kushal2000", "html_url": "https://github.com/kushal2000", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[]
1,610,285,401,000
1,610,979,702,000
1,610,979,702,000
CONTRIBUTOR
null
Adding Hatexplain - the first benchmark hate speech dataset covering multiple aspects of the issue
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1716/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1716/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1716", "html_url": "https://github.com/huggingface/datasets/pull/1716", "diff_url": "https://github.com/huggingface/datasets/pull/1716.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1716.patch", "merged_at": 1610979702000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1715
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1715/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1715/comments
https://api.github.com/repos/huggingface/datasets/issues/1715/events
https://github.com/huggingface/datasets/pull/1715
782,754,441
MDExOlB1bGxSZXF1ZXN0NTUyMjM2NDA5
1,715
add Korean intonation-aided intention identification dataset
{ "login": "stevhliu", "id": 59462357, "node_id": "MDQ6VXNlcjU5NDYyMzU3", "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stevhliu", "html_url": "https://github.com/stevhliu", "followers_url": "https://api.github.com/users/ste...
[]
closed
false
null
[]
null
[]
1,610,260,144,000
1,631,897,653,000
1,610,471,673,000
MEMBER
null
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1715/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1715/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1715", "html_url": "https://github.com/huggingface/datasets/pull/1715", "diff_url": "https://github.com/huggingface/datasets/pull/1715.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1715.patch", "merged_at": 1610471672000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1714
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1714/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1714/comments
https://api.github.com/repos/huggingface/datasets/issues/1714/events
https://github.com/huggingface/datasets/pull/1714
782,416,276
MDExOlB1bGxSZXF1ZXN0NTUxOTc3MDA0
1,714
Adding adversarialQA dataset
{ "login": "maxbartolo", "id": 15869827, "node_id": "MDQ6VXNlcjE1ODY5ODI3", "avatar_url": "https://avatars.githubusercontent.com/u/15869827?v=4", "gravatar_id": "", "url": "https://api.github.com/users/maxbartolo", "html_url": "https://github.com/maxbartolo", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Oh that's a really cool one, we'll review/merge it soon!\r\n\r\nIn the meantime, do you have any specific positive/negative feedback on the process of adding a datasets Max?\r\nDid you follow the instruction in the [detailed step-by-step](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)?", ...
1,610,142,369,000
1,610,553,924,000
1,610,553,924,000
CONTRIBUTOR
null
Adding the adversarialQA dataset (https://adversarialqa.github.io/) from Beat the AI (https://arxiv.org/abs/2002.00293)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1714/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1714/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1714", "html_url": "https://github.com/huggingface/datasets/pull/1714", "diff_url": "https://github.com/huggingface/datasets/pull/1714.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1714.patch", "merged_at": 1610553924000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1713
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1713/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1713/comments
https://api.github.com/repos/huggingface/datasets/issues/1713/events
https://github.com/huggingface/datasets/issues/1713
782,337,723
MDU6SXNzdWU3ODIzMzc3MjM=
1,713
Installation using conda
{ "login": "pranav-s", "id": 9393002, "node_id": "MDQ6VXNlcjkzOTMwMDI=", "avatar_url": "https://avatars.githubusercontent.com/u/9393002?v=4", "gravatar_id": "", "url": "https://api.github.com/users/pranav-s", "html_url": "https://github.com/pranav-s", "followers_url": "https://api.github.com/users/prana...
[]
closed
false
null
[]
null
[ "Yes indeed the idea is to have the next release on conda cc @LysandreJik ", "Great! Did you guys have a timeframe in mind for the next release?\r\n\r\nThank you for all the great work in developing this library.", "I think we can have `datasets` on conda by next week. Will see what I can do!", "Thank you. Lo...
1,610,133,135,000
1,631,882,860,000
1,631,882,860,000
NONE
null
Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only be installed using pip and...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1713/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1713/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1712
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1712/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1712/comments
https://api.github.com/repos/huggingface/datasets/issues/1712/events
https://github.com/huggingface/datasets/pull/1712
782,313,097
MDExOlB1bGxSZXF1ZXN0NTUxODkxMDk4
1,712
Silicone
{ "login": "eusip", "id": 1551356, "node_id": "MDQ6VXNlcjE1NTEzNTY=", "avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4", "gravatar_id": "", "url": "https://api.github.com/users/eusip", "html_url": "https://github.com/eusip", "followers_url": "https://api.github.com/users/eusip/follower...
[]
closed
false
null
[]
null
[ "When should we expect to see our dataset appear in the search dropdown at huggingface.co?", "Hi @eusip,\r\n\r\n> When should we expect to see our dataset appear in the search dropdown at huggingface.co?\r\n\r\nwhen this PR is merged.", "Thanks!", "I've implemented all the changes requested by @lhoestq but I ...
1,610,130,258,000
1,611,238,357,000
1,611,225,071,000
CONTRIBUTOR
null
My collaborators and I within the Affective Computing team at Telecom Paris would like to push our spoken dialogue dataset for publication.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1712/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 1 }
https://api.github.com/repos/huggingface/datasets/issues/1712/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1712", "html_url": "https://github.com/huggingface/datasets/pull/1712", "diff_url": "https://github.com/huggingface/datasets/pull/1712.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1712.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1711
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1711/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1711/comments
https://api.github.com/repos/huggingface/datasets/issues/1711/events
https://github.com/huggingface/datasets/pull/1711
782,129,083
MDExOlB1bGxSZXF1ZXN0NTUxNzQxODA2
1,711
Fix windows path scheme in cached path
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,610,113,556,000
1,610,357,000,000
1,610,356,999,000
MEMBER
null
As noticed in #807 there's currently an issue with `cached_path` not raising `FileNotFoundError` on windows for absolute paths. This is due to the way we check for a path to be local or not. The check on the scheme using urlparse was incomplete. I fixed this and added tests
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1711/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1711/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1711", "html_url": "https://github.com/huggingface/datasets/pull/1711", "diff_url": "https://github.com/huggingface/datasets/pull/1711.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1711.patch", "merged_at": 1610356999000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1710
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1710/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1710/comments
https://api.github.com/repos/huggingface/datasets/issues/1710/events
https://github.com/huggingface/datasets/issues/1710
781,914,951
MDU6SXNzdWU3ODE5MTQ5NTE=
1,710
IsADirectoryError when trying to download C4
{ "login": "fredriko", "id": 5771366, "node_id": "MDQ6VXNlcjU3NzEzNjY=", "avatar_url": "https://avatars.githubusercontent.com/u/5771366?v=4", "gravatar_id": "", "url": "https://api.github.com/users/fredriko", "html_url": "https://github.com/fredriko", "followers_url": "https://api.github.com/users/fredr...
[]
open
false
null
[]
null
[ "I haven't tested C4 on my side so there so there may be a few bugs in the code/adjustments to make.\r\nHere it looks like in c4.py, line 190 one of the `files_to_download` is `'/'` which is invalid.\r\nValid files are paths to local files or URLs to remote files." ]
1,610,091,090,000
1,610,531,053,000
null
NONE
null
**TLDR**: I fail to download C4 and see a stacktrace originating in `IsADirectoryError` as an explanation for failure. How can the problem be fixed? **VERBOSE**: I use Python version 3.7 and have the following dependencies listed in my project: ``` datasets==1.2.0 apache-beam==2.26.0 ``` When runn...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1710/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1710/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1709
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1709/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1709/comments
https://api.github.com/repos/huggingface/datasets/issues/1709/events
https://github.com/huggingface/datasets/issues/1709
781,875,640
MDU6SXNzdWU3ODE4NzU2NDA=
1,709
Databases
{ "login": "JimmyJim1", "id": 68724553, "node_id": "MDQ6VXNlcjY4NzI0NTUz", "avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JimmyJim1", "html_url": "https://github.com/JimmyJim1", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[]
1,610,086,443,000
1,610,096,408,000
1,610,096,408,000
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1709/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1709/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1708
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1708/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1708/comments
https://api.github.com/repos/huggingface/datasets/issues/1708/events
https://github.com/huggingface/datasets/issues/1708
781,631,455
MDU6SXNzdWU3ODE2MzE0NTU=
1,708
<html dir="ltr" lang="en" class="focus-outline-visible"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
{ "login": "Louiejay54", "id": 77126849, "node_id": "MDQ6VXNlcjc3MTI2ODQ5", "avatar_url": "https://avatars.githubusercontent.com/u/77126849?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Louiejay54", "html_url": "https://github.com/Louiejay54", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[]
1,610,055,924,000
1,610,096,401,000
1,610,096,401,000
NONE
null
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1708/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1708/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1707
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1707/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1707/comments
https://api.github.com/repos/huggingface/datasets/issues/1707/events
https://github.com/huggingface/datasets/pull/1707
781,507,545
MDExOlB1bGxSZXF1ZXN0NTUxMjE5MDk2
1,707
Added generated READMEs for datasets that were missing one.
{ "login": "madlag", "id": 272253, "node_id": "MDQ6VXNlcjI3MjI1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4", "gravatar_id": "", "url": "https://api.github.com/users/madlag", "html_url": "https://github.com/madlag", "followers_url": "https://api.github.com/users/madlag/follow...
[]
closed
false
null
[]
null
[ "Looks like we need to trim the ones with too many configs, will look into it tomorrow!" ]
1,610,043,006,000
1,610,980,353,000
1,610,980,353,000
CONTRIBUTOR
null
This is it: we worked on a generator with Yacine @yjernite , and we generated dataset cards for all missing ones (161), with all the information we could gather from datasets repository, and using dummy_data to generate examples when possible. Code is available here for the moment: https://github.com/madlag/datasets...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1707/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 2, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1707/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1707", "html_url": "https://github.com/huggingface/datasets/pull/1707", "diff_url": "https://github.com/huggingface/datasets/pull/1707.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1707.patch", "merged_at": 1610980353000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1706/comments
https://api.github.com/repos/huggingface/datasets/issues/1706/events
https://github.com/huggingface/datasets/issues/1706
781,494,476
MDU6SXNzdWU3ODE0OTQ0NzY=
1,706
Error when downloading a large dataset on slow connection.
{ "login": "lucadiliello", "id": 23355969, "node_id": "MDQ6VXNlcjIzMzU1OTY5", "avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lucadiliello", "html_url": "https://github.com/lucadiliello", "followers_url": "https://api.github.c...
[]
open
false
null
[]
null
[ "Hi ! Is this an issue you have with `openwebtext` specifically or also with other datasets ?\r\n\r\nIt looks like the downloaded file is corrupted and can't be extracted using `tarfile`.\r\nCould you try loading it again with \r\n```python\r\nimport datasets\r\ndatasets.load_dataset(\"openwebtext\", download_mode=...
1,610,041,695,000
1,610,534,102,000
null
CONTRIBUTOR
null
I receive the following error after about an hour trying to download the `openwebtext` dataset. The code used is: ```python import datasets datasets.load_dataset("openwebtext") ``` > Traceback (most recent call last): ...
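A minimal sketch of the retry suggested in the comments, assuming the corrupted cached archive simply needs to be fetched again:

```python
from datasets import load_dataset

# Force a fresh download in case the previously cached archive is corrupted.
dataset = load_dataset("openwebtext", download_mode="force_redownload")
```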
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1706/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1706/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1705
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1705/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1705/comments
https://api.github.com/repos/huggingface/datasets/issues/1705/events
https://github.com/huggingface/datasets/pull/1705
781,474,949
MDExOlB1bGxSZXF1ZXN0NTUxMTkyMTc4
1,705
Add information about caching and verifications in "Load a Dataset" docs
{ "login": "SBrandeis", "id": 33657802, "node_id": "MDQ6VXNlcjMzNjU3ODAy", "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "gravatar_id": "", "url": "https://api.github.com/users/SBrandeis", "html_url": "https://github.com/SBrandeis", "followers_url": "https://api.github.com/users/...
[ { "id": 1935892861, "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation", "name": "documentation", "color": "0075ca", "default": true, "description": "Improvements or additions to documentation" } ]
closed
false
null
[]
null
[]
1,610,039,924,000
1,610,460,481,000
1,610,460,481,000
CONTRIBUTOR
null
Related to #215. Missing improvements from @lhoestq's #1703.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1705/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1705/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1705", "html_url": "https://github.com/huggingface/datasets/pull/1705", "diff_url": "https://github.com/huggingface/datasets/pull/1705.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1705.patch", "merged_at": 1610460481000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1704
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1704/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1704/comments
https://api.github.com/repos/huggingface/datasets/issues/1704/events
https://github.com/huggingface/datasets/pull/1704
781,402,757
MDExOlB1bGxSZXF1ZXN0NTUxMTMyNDI1
1,704
Update XSUM Factuality DatasetCard
{ "login": "vineeths96", "id": 50873201, "node_id": "MDQ6VXNlcjUwODczMjAx", "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vineeths96", "html_url": "https://github.com/vineeths96", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[]
1,610,033,834,000
1,610,458,204,000
1,610,458,204,000
CONTRIBUTOR
null
Update XSUM Factuality DatasetCard
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1704/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1704/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1704", "html_url": "https://github.com/huggingface/datasets/pull/1704", "diff_url": "https://github.com/huggingface/datasets/pull/1704.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1704.patch", "merged_at": 1610458204000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1703
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1703/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1703/comments
https://api.github.com/repos/huggingface/datasets/issues/1703/events
https://github.com/huggingface/datasets/pull/1703
781,395,146
MDExOlB1bGxSZXF1ZXN0NTUxMTI2MjA5
1,703
Improvements regarding caching and fingerprinting
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "I few comments here for discussion:\r\n- I'm not convinced yet the end user should really have to understand the difference between \"caching\" and 'fingerprinting\", what do you think? I think fingerprinting should probably stay as an internal thing. Is there a case where we want cahing without fingerprinting or ...
1,610,033,189,000
1,611,077,531,000
1,611,077,530,000
MEMBER
null
This PR adds these features: - Enable/disable caching If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets. It is equivalent to setting `load_from_cache` to `False` in dataset transforms. ```python from datasets import set_caching_enabled set_cach...
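A minimal usage sketch of the caching switch described above; the `squad` dataset and the `question_len` column are only illustrative assumptions:

```python
from datasets import load_dataset, set_caching_enabled

# Globally disable reloading of cached transformed datasets
# (equivalent to load_from_cache_file=False on each transform).
set_caching_enabled(False)

ds = load_dataset("squad", split="validation")
ds = ds.map(lambda example: {"question_len": len(example["question"])})  # recomputed, not reloaded

# Re-enable caching afterwards.
set_caching_enabled(True)
```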
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1703/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1703/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1703", "html_url": "https://github.com/huggingface/datasets/pull/1703", "diff_url": "https://github.com/huggingface/datasets/pull/1703.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1703.patch", "merged_at": 1611077530000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1702
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1702/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1702/comments
https://api.github.com/repos/huggingface/datasets/issues/1702/events
https://github.com/huggingface/datasets/pull/1702
781,383,277
MDExOlB1bGxSZXF1ZXN0NTUxMTE2NDc0
1,702
Fix importlib metadata import in py38

{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,610,032,230,000
1,610,102,835,000
1,610,102,835,000
MEMBER
null
In Python 3.8 there's no need to install `importlib_metadata` since it already exists as `importlib.metadata` in the standard lib.
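A common pattern for this kind of conditional import, shown only as an illustration (not necessarily the exact code in the PR):

```python
import sys

# Use the standard library module on Python 3.8+, fall back to the backport otherwise.
if sys.version_info >= (3, 8):
    import importlib.metadata as importlib_metadata
else:
    import importlib_metadata

print(importlib_metadata.version("datasets"))
```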
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1702/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1702/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1702", "html_url": "https://github.com/huggingface/datasets/pull/1702", "diff_url": "https://github.com/huggingface/datasets/pull/1702.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1702.patch", "merged_at": 1610102834000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1701
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1701/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1701/comments
https://api.github.com/repos/huggingface/datasets/issues/1701/events
https://github.com/huggingface/datasets/issues/1701
781,345,717
MDU6SXNzdWU3ODEzNDU3MTc=
1,701
Some datasets miss dataset_infos.json or dummy_data.zip
{ "login": "madlag", "id": 272253, "node_id": "MDQ6VXNlcjI3MjI1Mw==", "avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4", "gravatar_id": "", "url": "https://api.github.com/users/madlag", "html_url": "https://github.com/madlag", "followers_url": "https://api.github.com/users/madlag/follow...
[]
open
false
null
[]
null
[ "Thanks for reporting.\r\nWe should indeed add all the missing dummy_data.zip and also the dataset_infos.json at least for lm1b, reclor and wikihow.\r\n\r\nFor c4 I haven't tested the script and I think we'll require some optimizations regarding beam datasets before processing it.\r\n" ]
1,610,029,033,000
1,610,458,846,000
null
CONTRIBUTOR
null
While working on the dataset README generation script at https://github.com/madlag/datasets_readme_generator , I noticed that some datasets miss a dataset_infos.json: ``` c4 lm1b reclor wikihow ``` And some do not have a dummy_data.zip: ``` kor_nli math_dataset mlqa ms_marco newsgroup qa4mre qanga...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1701/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1701/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1700
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1700/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1700/comments
https://api.github.com/repos/huggingface/datasets/issues/1700/events
https://github.com/huggingface/datasets/pull/1700
781,333,589
MDExOlB1bGxSZXF1ZXN0NTUxMDc1NTg2
1,700
Update Curiosity dialogs DatasetCard
{ "login": "vineeths96", "id": 50873201, "node_id": "MDQ6VXNlcjUwODczMjAx", "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vineeths96", "html_url": "https://github.com/vineeths96", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[]
1,610,027,967,000
1,610,477,492,000
1,610,477,492,000
CONTRIBUTOR
null
Update Curiosity dialogs DatasetCard There are some entries in the data fields section yet to be filled. There is little information regarding those fields.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1700/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1700/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1700", "html_url": "https://github.com/huggingface/datasets/pull/1700", "diff_url": "https://github.com/huggingface/datasets/pull/1700.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1700.patch", "merged_at": 1610477492000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1699/comments
https://api.github.com/repos/huggingface/datasets/issues/1699/events
https://github.com/huggingface/datasets/pull/1699
781,271,558
MDExOlB1bGxSZXF1ZXN0NTUxMDIzODE5
1,699
Update DBRD dataset card and download URL
{ "login": "benjaminvdb", "id": 8875786, "node_id": "MDQ6VXNlcjg4NzU3ODY=", "avatar_url": "https://avatars.githubusercontent.com/u/8875786?v=4", "gravatar_id": "", "url": "https://api.github.com/users/benjaminvdb", "html_url": "https://github.com/benjaminvdb", "followers_url": "https://api.github.com/us...
[]
closed
false
null
[]
null
[ "not sure why the CI was not triggered though" ]
1,610,021,803,000
1,610,026,899,000
1,610,026,859,000
CONTRIBUTOR
null
I've added the Dutch Book Review Dataset (DBRD) during the recent sprint. This pull request makes two minor changes: 1. I'm changing the download URL from Google Drive to the dataset's GitHub release package. This is now possible because of PR #1316. 2. I've updated the dataset card. Cheers! 😄
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1699/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1699/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1699", "html_url": "https://github.com/huggingface/datasets/pull/1699", "diff_url": "https://github.com/huggingface/datasets/pull/1699.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1699.patch", "merged_at": 1610026859000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1698
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1698/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1698/comments
https://api.github.com/repos/huggingface/datasets/issues/1698/events
https://github.com/huggingface/datasets/pull/1698
781,152,561
MDExOlB1bGxSZXF1ZXN0NTUwOTI0ODQ3
1,698
Update Coached Conv Pref DatasetCard
{ "login": "vineeths96", "id": 50873201, "node_id": "MDQ6VXNlcjUwODczMjAx", "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vineeths96", "html_url": "https://github.com/vineeths96", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Really cool!\r\n\r\nCan you add some task tags for `dialogue-modeling` (under `sequence-modeling`) and `parsing` (under `structured-prediction`)?" ]
1,610,010,436,000
1,610,125,473,000
1,610,125,472,000
CONTRIBUTOR
null
Update Coached Conversational Preference DatasetCard
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1698/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1698/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1698", "html_url": "https://github.com/huggingface/datasets/pull/1698", "diff_url": "https://github.com/huggingface/datasets/pull/1698.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1698.patch", "merged_at": 1610125472000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1697
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1697/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1697/comments
https://api.github.com/repos/huggingface/datasets/issues/1697/events
https://github.com/huggingface/datasets/pull/1697
781,126,579
MDExOlB1bGxSZXF1ZXN0NTUwOTAzNzI5
1,697
Update DialogRE DatasetCard
{ "login": "vineeths96", "id": 50873201, "node_id": "MDQ6VXNlcjUwODczMjAx", "avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4", "gravatar_id": "", "url": "https://api.github.com/users/vineeths96", "html_url": "https://github.com/vineeths96", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "Same as #1698, can you add a task tag for dialogue-modeling (under sequence-modeling) :) ?" ]
1,610,007,753,000
1,610,026,468,000
1,610,026,468,000
CONTRIBUTOR
null
Update the information in the dataset card for the Dialog RE dataset.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1697/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1697/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1697", "html_url": "https://github.com/huggingface/datasets/pull/1697", "diff_url": "https://github.com/huggingface/datasets/pull/1697.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1697.patch", "merged_at": 1610026468000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1696
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1696/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1696/comments
https://api.github.com/repos/huggingface/datasets/issues/1696/events
https://github.com/huggingface/datasets/issues/1696
781,096,918
MDU6SXNzdWU3ODEwOTY5MTg=
1,696
Unable to install datasets
{ "login": "glee2429", "id": 12635475, "node_id": "MDQ6VXNlcjEyNjM1NDc1", "avatar_url": "https://avatars.githubusercontent.com/u/12635475?v=4", "gravatar_id": "", "url": "https://api.github.com/users/glee2429", "html_url": "https://github.com/glee2429", "followers_url": "https://api.github.com/users/gle...
[]
closed
false
null
[]
null
[ "Maybe try to create a virtual env with python 3.8 or 3.7", "Thanks, @thomwolf! I fixed the issue by downgrading python to 3.7. ", "Damn sorry", "Damn sorry" ]
1,610,004,277,000
1,610,065,985,000
1,610,057,165,000
NONE
null
** Edit ** I believe there's a bug with the package when you're installing it with Python 3.9. I recommend sticking with previous versions. Thanks, @thomwolf for the insight! **Short description** I followed the instructions for installing datasets (https://huggingface.co/docs/datasets/installation.html). Howev...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1696/reactions", "total_count": 1, "+1": 1, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1696/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1695
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1695/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1695/comments
https://api.github.com/repos/huggingface/datasets/issues/1695/events
https://github.com/huggingface/datasets/pull/1695
780,971,987
MDExOlB1bGxSZXF1ZXN0NTUwNzc1OTU4
1,695
fix ner_tag bugs in thainer
{ "login": "cstorm125", "id": 15519308, "node_id": "MDQ6VXNlcjE1NTE5MzA4", "avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cstorm125", "html_url": "https://github.com/cstorm125", "followers_url": "https://api.github.com/users/...
[]
closed
false
null
[]
null
[ "> Thanks :)\r\n> \r\n> Apparently the dummy_data.zip got removed. Is this expected ?\r\n> Also can you remove the `data-pos.conll` file that you added ?\r\n\r\nNot expected. I forgot to remove the `dummy_data` folder used to create `dummy_data.zip`. \r\nChanged to only `dummy_data.zip`." ]
1,609,985,553,000
1,610,030,625,000
1,610,030,608,000
CONTRIBUTOR
null
Fix a bug that results in `ner_tag` always being equal to 'O'.
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1695/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1695/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1695", "html_url": "https://github.com/huggingface/datasets/pull/1695", "diff_url": "https://github.com/huggingface/datasets/pull/1695.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1695.patch", "merged_at": 1610030608000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1694
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1694/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1694/comments
https://api.github.com/repos/huggingface/datasets/issues/1694/events
https://github.com/huggingface/datasets/pull/1694
780,429,080
MDExOlB1bGxSZXF1ZXN0NTUwMzI0Mjcx
1,694
Add OSCAR
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[ "Hi @lhoestq, on the OSCAR dataset, the document boundaries are defined by an empty line. Are there any chances to keep this empty line or explicitly group the sentences of a document? I'm asking for this 'cause I need to know if some sentences belong to the same document on my current OSCAR dataset usage.", "Ind...
1,609,928,468,000
1,611,565,833,000
1,611,565,832,000
MEMBER
null
Continuation of #348 The files have been moved to S3 and only the unshuffled version is available. Both original and deduplicated versions of each language are available. Example of usage: ```python from datasets import load_dataset oscar_dedup_en = load_dataset("oscar", "unshuffled_deduplicated_en", split="...
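A minimal sketch of the usage pattern above; the small Afrikaans config is used here only so the example downloads quickly, and the `text` field name follows the OSCAR schema:

```python
from datasets import load_dataset

# Config names follow "unshuffled_<original|deduplicated>_<lang>".
oscar_dedup_af = load_dataset("oscar", "unshuffled_deduplicated_af", split="train")
print(oscar_dedup_af[0]["text"][:200])
```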
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1694/reactions", "total_count": 2, "+1": 0, "-1": 0, "laugh": 0, "hooray": 2, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1694/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1694", "html_url": "https://github.com/huggingface/datasets/pull/1694", "diff_url": "https://github.com/huggingface/datasets/pull/1694.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1694.patch", "merged_at": 1611565832000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1693
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1693/comments
https://api.github.com/repos/huggingface/datasets/issues/1693/events
https://github.com/huggingface/datasets/pull/1693
780,268,595
MDExOlB1bGxSZXF1ZXN0NTUwMTc3MDEx
1,693
Fix reuters metadata parsing errors
{ "login": "jbragg", "id": 2238344, "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "gravatar_id": "", "url": "https://api.github.com/users/jbragg", "html_url": "https://github.com/jbragg", "followers_url": "https://api.github.com/users/jbragg/foll...
[]
closed
false
null
[]
null
[]
1,609,921,563,000
1,610,063,627,000
1,610,028,082,000
CONTRIBUTOR
null
Was missing the last entry in each metadata category
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1693/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1693/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1693", "html_url": "https://github.com/huggingface/datasets/pull/1693", "diff_url": "https://github.com/huggingface/datasets/pull/1693.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1693.patch", "merged_at": 1610028082000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1691
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1691/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1691/comments
https://api.github.com/repos/huggingface/datasets/issues/1691/events
https://github.com/huggingface/datasets/pull/1691
779,882,271
MDExOlB1bGxSZXF1ZXN0NTQ5ODE3NTM0
1,691
Updated HuggingFace Datasets README (fix typos)
{ "login": "8bitmp3", "id": 19637339, "node_id": "MDQ6VXNlcjE5NjM3MzM5", "avatar_url": "https://avatars.githubusercontent.com/u/19637339?v=4", "gravatar_id": "", "url": "https://api.github.com/users/8bitmp3", "html_url": "https://github.com/8bitmp3", "followers_url": "https://api.github.com/users/8bitmp...
[]
closed
false
null
[]
null
[]
1,609,899,278,000
1,610,839,847,000
1,610,013,992,000
CONTRIBUTOR
null
Awesome work on 🤗 Datasets. I found a couple of small typos in the README. Hope this helps. ![](https://emojipedia-us.s3.dualstack.us-west-1.amazonaws.com/thumbs/160/google/56/hugging-face_1f917.png)
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1691/reactions", "total_count": 1, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 1, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1691/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1691", "html_url": "https://github.com/huggingface/datasets/pull/1691", "diff_url": "https://github.com/huggingface/datasets/pull/1691.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1691.patch", "merged_at": 1610013992000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1690
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1690/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1690/comments
https://api.github.com/repos/huggingface/datasets/issues/1690/events
https://github.com/huggingface/datasets/pull/1690
779,441,631
MDExOlB1bGxSZXF1ZXN0NTQ5NDEwOTgw
1,690
Fast start up
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,609,873,673,000
1,609,942,859,000
1,609,942,858,000
MEMBER
null
Currently if optional dependencies such as tensorflow, torch, apache_beam, faiss and elasticsearch are installed, then it takes a long time to do `import datasets` since it imports all of these heavy dependencies. To make a fast start up for `datasets` I changed that so that they are not imported when `datasets` is ...
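A sketch of the general lazy-import pattern involved, illustrative only and not the PR's exact code:

```python
import importlib.util

# Check whether an optional dependency is installed without importing it,
# so that `import datasets` stays fast even when heavy libraries are present.
TORCH_AVAILABLE = importlib.util.find_spec("torch") is not None

def to_torch_tensor(values):
    if not TORCH_AVAILABLE:
        raise ImportError("PyTorch is required for this formatting option.")
    import torch  # imported lazily, only when actually needed
    return torch.tensor(values)
```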
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1690/reactions", "total_count": 3, "+1": 0, "-1": 0, "laugh": 0, "hooray": 3, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1690/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1690", "html_url": "https://github.com/huggingface/datasets/pull/1690", "diff_url": "https://github.com/huggingface/datasets/pull/1690.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1690.patch", "merged_at": 1609942858000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1689
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1689/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1689/comments
https://api.github.com/repos/huggingface/datasets/issues/1689/events
https://github.com/huggingface/datasets/pull/1689
779,107,313
MDExOlB1bGxSZXF1ZXN0NTQ5MTEwMDgw
1,689
Fix ade_corpus_v2 config names
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,609,857,208,000
1,609,858,509,000
1,609,858,508,000
MEMBER
null
There are currently some typos in the config names of the `ade_corpus_v2` dataset, I fixed them: - Ade_corpos_v2_classificaion -> Ade_corpus_v2_classification - Ade_corpos_v2_drug_ade_relation -> Ade_corpus_v2_drug_ade_relation - Ade_corpos_v2_drug_dosage_relation -> Ade_corpus_v2_drug_dosage_relation
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1689/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1689/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1689", "html_url": "https://github.com/huggingface/datasets/pull/1689", "diff_url": "https://github.com/huggingface/datasets/pull/1689.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1689.patch", "merged_at": 1609858508000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1688
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1688/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1688/comments
https://api.github.com/repos/huggingface/datasets/issues/1688/events
https://github.com/huggingface/datasets/pull/1688
779,029,685
MDExOlB1bGxSZXF1ZXN0NTQ5MDM5ODg0
1,688
Fix DaNE last example
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,609,853,377,000
1,609,855,215,000
1,609,855,213,000
MEMBER
null
The last example from the DaNE dataset is empty. Fix #1686
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1688/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1688/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1688", "html_url": "https://github.com/huggingface/datasets/pull/1688", "diff_url": "https://github.com/huggingface/datasets/pull/1688.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1688.patch", "merged_at": 1609855213000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1687
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1687/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1687/comments
https://api.github.com/repos/huggingface/datasets/issues/1687/events
https://github.com/huggingface/datasets/issues/1687
779,004,894
MDU6SXNzdWU3NzkwMDQ4OTQ=
1,687
Question: Shouldn't .info be a part of DatasetDict?
{ "login": "KennethEnevoldsen", "id": 23721977, "node_id": "MDQ6VXNlcjIzNzIxOTc3", "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KennethEnevoldsen", "html_url": "https://github.com/KennethEnevoldsen", "followers_url": "https...
[]
open
false
null
[]
null
[ "We could do something. There is a part of `.info` which is split specific (cache files, split instructions) but maybe if could be made to work.", "Yes this was kinda the idea I was going for. DatasetDict.info would be the shared info amongs the datasets (maybe even some info on how they differ). " ]
1,609,852,121,000
1,610,014,686,000
null
CONTRIBUTOR
null
Currently, only `Dataset` contains the .info or .features, but many datasets contain standard splits (train, test) and thus the underlying information is the same (or at least should be) across the datasets. For instance: ``` >>> ds = datasets.load_dataset("conll2002", "es") >>> ds.info Traceback (most rece...
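A minimal sketch of the current workaround, reading the shared metadata from one split of the `DatasetDict`:

```python
import datasets

ds = datasets.load_dataset("conll2002", "es")

# DatasetDict has no .info itself, but each split does; for standard splits
# the features are typically identical, so one split is representative.
print(ds["train"].features)
print(ds["train"].info.description)
```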
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1687/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1687/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1686
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1686/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1686/comments
https://api.github.com/repos/huggingface/datasets/issues/1686/events
https://github.com/huggingface/datasets/issues/1686
778,921,684
MDU6SXNzdWU3Nzg5MjE2ODQ=
1,686
Dataset Error: DaNE contains empty samples at the end
{ "login": "KennethEnevoldsen", "id": 23721977, "node_id": "MDQ6VXNlcjIzNzIxOTc3", "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KennethEnevoldsen", "html_url": "https://github.com/KennethEnevoldsen", "followers_url": "https...
[]
closed
false
null
[]
null
[ "Thanks for reporting, I opened a PR to fix that", "One the PR is merged the fix will be available in the next release of `datasets`.\r\n\r\nIf you don't want to wait the next release you can still load the script from the master branch with\r\n\r\n```python\r\nload_dataset(\"dane\", script_version=\"master\")\r\...
1,609,847,666,000
1,609,855,269,000
1,609,855,213,000
CONTRIBUTOR
null
The DaNE dataset contains empty samples at the end. These are easy to remove using a filter but should probably not be there to begin with, as they can cause errors. ```python >>> import datasets [...] >>> dataset = datasets.load_dataset("dane") [...] >>> dataset["test"][-1] {'dep_ids': [], 'dep_labels': ...
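A minimal sketch of the filter workaround mentioned above; the `tokens` column name is an assumption and may need to be adjusted to whichever column is empty in the faulty rows:

```python
import datasets

dataset = datasets.load_dataset("dane")

# Drop the trailing empty example(s); "tokens" is assumed to be one of the
# columns that is empty in those rows.
dataset = dataset.filter(lambda example: len(example["tokens"]) > 0)
```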
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1686/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1686/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1685
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1685/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1685/comments
https://api.github.com/repos/huggingface/datasets/issues/1685/events
https://github.com/huggingface/datasets/pull/1685
778,914,431
MDExOlB1bGxSZXF1ZXN0NTQ4OTM1MzY2
1,685
Update README.md of covid-tweets-japanese
{ "login": "forest1988", "id": 2755894, "node_id": "MDQ6VXNlcjI3NTU4OTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4", "gravatar_id": "", "url": "https://api.github.com/users/forest1988", "html_url": "https://github.com/forest1988", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
[ "Thanks for reviewing and merging!" ]
1,609,847,247,000
1,609,928,832,000
1,609,925,470,000
CONTRIBUTOR
null
Update README.md of covid-tweets-japanese added by PR https://github.com/huggingface/datasets/pull/1367 and https://github.com/huggingface/datasets/pull/1402. - Update "Data Splits" to be more precise that no information is provided for now. - old: [More Information Needed] - new: No information about data spl...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1685/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1685/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1685", "html_url": "https://github.com/huggingface/datasets/pull/1685", "diff_url": "https://github.com/huggingface/datasets/pull/1685.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1685.patch", "merged_at": 1609925470000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1684
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1684/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1684/comments
https://api.github.com/repos/huggingface/datasets/issues/1684/events
https://github.com/huggingface/datasets/pull/1684
778,356,196
MDExOlB1bGxSZXF1ZXN0NTQ4NDU3NDY1
1,684
Add CANER Corpus
{ "login": "KMFODA", "id": 35491698, "node_id": "MDQ6VXNlcjM1NDkxNjk4", "avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KMFODA", "html_url": "https://github.com/KMFODA", "followers_url": "https://api.github.com/users/KMFODA/fo...
[]
closed
false
null
[]
null
[]
1,609,793,351,000
1,611,565,760,000
1,611,565,760,000
CONTRIBUTOR
null
What does this PR do? Adds the following dataset: https://github.com/RamziSalah/Classical-Arabic-Named-Entity-Recognition-Corpus Who can review? @lhoestq
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1684/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1684/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1684", "html_url": "https://github.com/huggingface/datasets/pull/1684", "diff_url": "https://github.com/huggingface/datasets/pull/1684.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1684.patch", "merged_at": 1611565760000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1683
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1683/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1683/comments
https://api.github.com/repos/huggingface/datasets/issues/1683/events
https://github.com/huggingface/datasets/issues/1683
778,287,612
MDU6SXNzdWU3NzgyODc2MTI=
1,683
`ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext
{ "login": "abarbosa94", "id": 6608232, "node_id": "MDQ6VXNlcjY2MDgyMzI=", "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abarbosa94", "html_url": "https://github.com/abarbosa94", "followers_url": "https://api.github.com/users...
[]
closed
false
null
[]
null
[ "Looks like the mapping function returns a dictionary with a 768-dim array in the `embeddings` field. Since the map is batched, we actually expect the `embeddings` field to be an array of shape (batch_size, 768) to have one embedding per example in the batch.\r\n\r\nTo fix that can you try to remove one of the `[0]...
1,609,786,073,000
1,609,787,085,000
1,609,787,085,000
CONTRIBUTOR
null
It seems to fail the final batch ): steps to reproduce: ``` from datasets import load_dataset from elasticsearch import Elasticsearch import torch from transformers import file_utils, set_seed from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast MAX_SEQ_LENGTH = 256 ctx_encoder = DPRCon...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1683/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1683/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1682
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1682/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1682/comments
https://api.github.com/repos/huggingface/datasets/issues/1682/events
https://github.com/huggingface/datasets/pull/1682
778,268,156
MDExOlB1bGxSZXF1ZXN0NTQ4Mzg1NTk1
1,682
Don't use xlrd for xlsx files
{ "login": "lhoestq", "id": 42851186, "node_id": "MDQ6VXNlcjQyODUxMTg2", "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lhoestq", "html_url": "https://github.com/lhoestq", "followers_url": "https://api.github.com/users/lhoest...
[]
closed
false
null
[]
null
[]
1,609,783,910,000
1,609,783,994,000
1,609,783,993,000
MEMBER
null
Since the latest release of `xlrd` (2.0), support for xlsx files has been dropped, so we needed to use something else. A good alternative is `openpyxl`, which also integrates with pandas, so we can still call `pd.read_excel`. I left the unused import of `openpyxl` in the dataset scripts to show users that ...
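A minimal sketch of reading an xlsx file through pandas with the openpyxl engine (the file path is hypothetical):

```python
import pandas as pd

# openpyxl handles .xlsx; recent xlrd (>= 2.0) only reads legacy .xls files.
df = pd.read_excel("data.xlsx", engine="openpyxl")
```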
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1682/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1682/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1682", "html_url": "https://github.com/huggingface/datasets/pull/1682", "diff_url": "https://github.com/huggingface/datasets/pull/1682.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1682.patch", "merged_at": 1609783993000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1681
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1681/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1681/comments
https://api.github.com/repos/huggingface/datasets/issues/1681/events
https://github.com/huggingface/datasets/issues/1681
777,644,163
MDU6SXNzdWU3Nzc2NDQxNjM=
1,681
Dataset "dane" missing
{ "login": "KennethEnevoldsen", "id": 23721977, "node_id": "MDQ6VXNlcjIzNzIxOTc3", "avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4", "gravatar_id": "", "url": "https://api.github.com/users/KennethEnevoldsen", "html_url": "https://github.com/KennethEnevoldsen", "followers_url": "https...
[]
closed
false
null
[]
null
[ "Hi @KennethEnevoldsen ,\r\nI think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of datasets.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of datasets using pip:\r\npip i...
1,609,682,583,000
1,609,835,735,000
1,609,835,713,000
CONTRIBUTOR
null
The `dane` dataset appears to be missing in the latest version (1.1.3). ```python >>> import datasets >>> datasets.__version__ '1.1.3' >>> "dane" in datasets.list_datasets() True ``` As we can see it should be present, but it doesn't seem to be findable when using `load_dataset`. ```python >>> datasets.load...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1681/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1681/timeline
null
completed
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1680
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1680/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1680/comments
https://api.github.com/repos/huggingface/datasets/issues/1680/events
https://github.com/huggingface/datasets/pull/1680
777,623,053
MDExOlB1bGxSZXF1ZXN0NTQ3ODY4MjEw
1,680
added TurkishProductReviews dataset
{ "login": "basakbuluz", "id": 41359672, "node_id": "MDQ6VXNlcjQxMzU5Njcy", "avatar_url": "https://avatars.githubusercontent.com/u/41359672?v=4", "gravatar_id": "", "url": "https://api.github.com/users/basakbuluz", "html_url": "https://github.com/basakbuluz", "followers_url": "https://api.github.com/use...
[]
closed
false
null
[]
null
[ "@lhoestq, can you please review this PR?", "Thanks for the suggestions. Updates were made and dataset_infos.json file was created again." ]
1,609,674,779,000
1,609,784,135,000
1,609,784,135,000
CONTRIBUTOR
null
This PR added the **Turkish Product Reviews Dataset, which contains 235.165 product reviews collected online. There are 220.284 positive and 14881 negative reviews**. - **Repository:** [turkish-text-data](https://github.com/fthbrmnby/turkish-text-data) - **Point of Contact:** Fatih Barmanbay - @fthbrmnby
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1680/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1680/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1680", "html_url": "https://github.com/huggingface/datasets/pull/1680", "diff_url": "https://github.com/huggingface/datasets/pull/1680.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1680.patch", "merged_at": 1609784135000 }
true
https://api.github.com/repos/huggingface/datasets/issues/1679
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1679/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1679/comments
https://api.github.com/repos/huggingface/datasets/issues/1679/events
https://github.com/huggingface/datasets/issues/1679
777,587,792
MDU6SXNzdWU3Nzc1ODc3OTI=
1,679
Can't import cc100 dataset
{ "login": "alighofrani95", "id": 14968123, "node_id": "MDQ6VXNlcjE0OTY4MTIz", "avatar_url": "https://avatars.githubusercontent.com/u/14968123?v=4", "gravatar_id": "", "url": "https://api.github.com/users/alighofrani95", "html_url": "https://github.com/alighofrani95", "followers_url": "https://api.githu...
[]
open
false
null
[]
null
[ "cc100 was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `cc100` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nlang = \"en\"\r\ndataset = load_dataset(\"cc100\", la...
1,609,657,976,000
1,609,785,698,000
null
NONE
null
There is an issue importing the cc100 dataset. ``` from datasets import load_dataset dataset = load_dataset("cc100") ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/cc100/cc100.py During handling of the above exception, another exception occur...
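A minimal sketch of the fix suggested in the comments, assuming an up-to-date `datasets` install that already ships the cc100 script:

```python
# pip install --upgrade datasets   (cc100 was only added in a recent release)
from datasets import load_dataset

dataset = load_dataset("cc100", lang="en")
```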
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1679/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1679/timeline
null
null
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1678
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1678/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1678/comments
https://api.github.com/repos/huggingface/datasets/issues/1678/events
https://github.com/huggingface/datasets/pull/1678
777,567,920
MDExOlB1bGxSZXF1ZXN0NTQ3ODI4MTMy
1,678
Switchboard Dialog Act Corpus added under `datasets/swda`
{ "login": "gmihaila", "id": 22454783, "node_id": "MDQ6VXNlcjIyNDU0Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gmihaila", "html_url": "https://github.com/gmihaila", "followers_url": "https://api.github.com/users/gmi...
[]
closed
false
null
[]
null
[ "@lhoestq Thank you for your detailed comments! I fixed everything you suggested.\r\n\r\nPlease let me know if I'm missing anything else.", "It looks like the Transcript and Utterance objects are missing, maybe we can mention it in the README ? Or just add them ? @gmihaila @bhavitvyamalik ", "Hi @lhoestq,\r\nI'...
1,609,646,021,000
1,610,129,361,000
1,609,841,195,000
CONTRIBUTOR
null
Switchboard Dialog Act Corpus Intro: The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2, with turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information about the associated turn. The SwDA project was undertaken at UC ...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1678/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1678/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1678", "html_url": "https://github.com/huggingface/datasets/pull/1678", "diff_url": "https://github.com/huggingface/datasets/pull/1678.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1678.patch", "merged_at": 1609841195000 }
true
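Since the PR above was merged, a minimal loading sketch for the new `swda` script might look like the following; the `train` split name is an assumption for illustration, as this record does not list the splits.

```python
from datasets import load_dataset

# Sketch only: load the Switchboard Dialog Act Corpus added by this PR and
# peek at one utterance-level example. The "train" split name is assumed.
swda = load_dataset("swda")
print(swda)            # shows which splits the script actually provides
print(swda["train"][0])
```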
https://api.github.com/repos/huggingface/datasets/issues/1677
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1677/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1677/comments
https://api.github.com/repos/huggingface/datasets/issues/1677/events
https://github.com/huggingface/datasets/pull/1677
777,553,383
MDExOlB1bGxSZXF1ZXN0NTQ3ODE3ODI1
1,677
Switchboard Dialog Act Corpus added under `datasets/swda`
{ "login": "gmihaila", "id": 22454783, "node_id": "MDQ6VXNlcjIyNDU0Nzgz", "avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gmihaila", "html_url": "https://github.com/gmihaila", "followers_url": "https://api.github.com/users/gmi...
[]
closed
false
null
[]
null
[ "Need to fix code formatting." ]
1,609,636,602,000
1,609,642,557,000
1,609,642,556,000
CONTRIBUTOR
null
Pleased to announce that I added my first dataset **Switchboard Dialog Act Corpus**. I think this is an important dataset to add since it is the only one related to dialogue act classification. Hope the pull request is ok. Wasn't able to see any special formatting for the pull request form. The Swi...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1677/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1677/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1677", "html_url": "https://github.com/huggingface/datasets/pull/1677", "diff_url": "https://github.com/huggingface/datasets/pull/1677.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1677.patch", "merged_at": null }
true
https://api.github.com/repos/huggingface/datasets/issues/1676
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1676/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1676/comments
https://api.github.com/repos/huggingface/datasets/issues/1676/events
https://github.com/huggingface/datasets/pull/1676
777,477,645
MDExOlB1bGxSZXF1ZXN0NTQ3NzY1OTY3
1,676
new version of Ted Talks IWSLT (WIT3)
{ "login": "skyprince999", "id": 9033954, "node_id": "MDQ6VXNlcjkwMzM5NTQ=", "avatar_url": "https://avatars.githubusercontent.com/u/9033954?v=4", "gravatar_id": "", "url": "https://api.github.com/users/skyprince999", "html_url": "https://github.com/skyprince999", "followers_url": "https://api.github.com...
[]
closed
false
null
[]
null
[ "> Nice thank you ! Actually as it is a translation dataset we should probably have one configuration = one language pair no ?\r\n> \r\n> Could you use the same trick for this dataset ?\r\n\r\nI was looking for this input, infact I had written a long post on the Slack channel,...(_but unfortunately due to the holid...
1,609,601,403,000
1,610,619,019,000
1,610,619,019,000
CONTRIBUTOR
null
In the previous iteration #1608 I had used language pairs, which created 21,582 configs (109*108)! Now, the TED talks for _each language_ are a separate config, so it's much cleaner with _just 109 configs_ (one for each language). Dummy files were created manually. Locally I was able to clear the `python dataset...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1676/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1676/timeline
null
null
false
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1676", "html_url": "https://github.com/huggingface/datasets/pull/1676", "diff_url": "https://github.com/huggingface/datasets/pull/1676.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1676.patch", "merged_at": 1610619019000 }
true
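The PR body above describes one configuration per language, so a loading sketch consistent with that description is given below. The review comments ask for language pairs instead, so the interface that was eventually merged may well differ; treat both the config name "nl" and the one-config-per-language layout as assumptions.

```python
from datasets import load_dataset

# Sketch matching the layout described in this PR's body (one config per
# language). "nl" is an assumed example config; the reviewed/merged version
# may use language pairs instead, as suggested in the discussion.
ted_nl = load_dataset("ted_talks_iwslt", "nl")
print(ted_nl)
```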
https://api.github.com/repos/huggingface/datasets/issues/1675
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1675/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1675/comments
https://api.github.com/repos/huggingface/datasets/issues/1675/events
https://github.com/huggingface/datasets/issues/1675
777,367,320
MDU6SXNzdWU3NzczNjczMjA=
1,675
Add the 800GB Pile dataset?
{ "login": "lewtun", "id": 26859204, "node_id": "MDQ6VXNlcjI2ODU5MjA0", "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "gravatar_id": "", "url": "https://api.github.com/users/lewtun", "html_url": "https://github.com/lewtun", "followers_url": "https://api.github.com/users/lewtun/fo...
[ { "id": 2067376369, "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request", "name": "dataset request", "color": "e99695", "default": false, "description": "Requesting to add a new dataset" } ]
closed
false
{ "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_url": "https://api.g...
[ { "login": "albertvillanova", "id": 8515462, "node_id": "MDQ6VXNlcjg1MTU0NjI=", "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "gravatar_id": "", "url": "https://api.github.com/users/albertvillanova", "html_url": "https://github.com/albertvillanova", "followers_...
null
[ "The pile dataset would be very nice.\r\nBenchmarks show that pile trained models achieve better results than most of actually trained models", "The pile can very easily be added and adapted using this [tfds implementation](https://github.com/EleutherAI/The-Pile/blob/master/the_pile/tfds_pile.py) from the repo. \...
1,609,541,892,000
1,638,372,547,000
1,638,372,547,000
MEMBER
null
## Adding a Dataset - **Name:** The Pile - **Description:** The Pile is an 825 GiB diverse, open-source language modelling dataset that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement - **Paper:*...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1675/reactions", "total_count": 12, "+1": 4, "-1": 0, "laugh": 0, "hooray": 0, "confused": 1, "heart": 0, "rocket": 5, "eyes": 2 }
https://api.github.com/repos/huggingface/datasets/issues/1675/timeline
null
completed
null
null
false
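Given the corpus size quoted in the request above (~825 GiB), a streaming-style sketch is the natural way to try it out once a loading script exists. Both the dataset name `the_pile` and the `text` field are assumptions here; the issue itself only requests that the dataset be added.

```python
from datasets import load_dataset

# Hypothetical sketch: stream a few examples instead of downloading ~825 GiB.
# The dataset name "the_pile" and the "text" field are assumptions; this also
# requires a datasets version that supports streaming=True.
pile = load_dataset("the_pile", split="train", streaming=True)
for i, example in enumerate(pile):
    print(example["text"][:200])
    if i == 2:
        break
```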
https://api.github.com/repos/huggingface/datasets/issues/1674
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1674/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1674/comments
https://api.github.com/repos/huggingface/datasets/issues/1674/events
https://github.com/huggingface/datasets/issues/1674
777,321,840
MDU6SXNzdWU3NzczMjE4NDA=
1,674
dutch_social can't be loaded
{ "login": "koenvandenberge", "id": 10134844, "node_id": "MDQ6VXNlcjEwMTM0ODQ0", "avatar_url": "https://avatars.githubusercontent.com/u/10134844?v=4", "gravatar_id": "", "url": "https://api.github.com/users/koenvandenberge", "html_url": "https://github.com/koenvandenberge", "followers_url": "https://api...
[]
open
false
null
[]
null
[ "exactly the same issue in some other datasets.\r\nDid you find any solution??\r\n", "Hi @koenvandenberge and @alighofrani95!\r\nThe datasets you're experiencing issues with were most likely added recently to the `datasets` library, meaning they have not been released yet. They will be released with the v2 of the...
1,609,522,628,000
1,609,841,821,000
null
NONE
null
Hi all, I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social). However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links. ``` (base) Koens-MacBook-Pro:~ koe...
{ "url": "https://api.github.com/repos/huggingface/datasets/issues/1674/reactions", "total_count": 0, "+1": 0, "-1": 0, "laugh": 0, "hooray": 0, "confused": 0, "heart": 0, "rocket": 0, "eyes": 0 }
https://api.github.com/repos/huggingface/datasets/issues/1674/timeline
null
null
null
null
false
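The maintainers' reply in the record above says the `dutch_social` script had not been released yet, which suggests two workarounds: upgrade `datasets` once the next version is out, or load the script straight from the master branch. The sketch below shows the latter; the `script_version` keyword is the 1.x-era name for that option and is an assumption here, since the quoted comment is truncated.

```python
from datasets import load_dataset

# Sketch of the workaround implied by the maintainers' reply: the dutch_social
# script only exists on the master branch until the next release, so point
# load_dataset at master. `script_version` is the 1.x-era parameter name and
# is assumed here; newer releases renamed it.
dutch_social = load_dataset("dutch_social", script_version="master")
print(dutch_social)
```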