url stringlengths 58 61 | repository_url stringclasses 1 value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 46 51 | id int64 599M 1.13B | node_id stringlengths 18 32 | number int64 1 3.71k | title stringlengths 1 276 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone dict | comments int64 0 42 | created_at timestamp[s] | updated_at timestamp[s] | closed_at timestamp[s] | author_association stringclasses 3 values | active_lock_reason null | draft bool 2 classes | pull_request dict | body stringlengths 0 228k ⌀ | reactions dict | timeline_url stringlengths 67 70 | performed_via_github_app null |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1780/comments | https://api.github.com/repos/huggingface/datasets/issues/1780/events | https://github.com/huggingface/datasets/pull/1780 | 793,882,132 | MDExOlB1bGxSZXF1ZXN0NTYxNDkxNTgy | 1,780 | Update SciFact URL | {
"login": "dwadden",
"id": 3091916,
"node_id": "MDQ6VXNlcjMwOTE5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwadden",
"html_url": "https://github.com/dwadden",
"followers_url": "https://api.github.com/users/dwadden/... | [] | closed | false | null | [] | null | 7 | 2021-01-26T02:49:06 | 2021-01-28T18:48:00 | 2021-01-28T10:19:45 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1780",
"html_url": "https://github.com/huggingface/datasets/pull/1780",
"diff_url": "https://github.com/huggingface/datasets/pull/1780.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1780.patch",
"merged_at": "2021-01-28T10:19... | Hi,
I'm following up on this [issue](https://github.com/huggingface/datasets/issues/1717). I'm the SciFact dataset creator, and I'm trying to update the SciFact data URL in your repo. Thanks again for adding the dataset!
Basically, I'd just like to change the `_URL` to `"https://scifact.s3-us-west-2.amazonaws.com/re... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1780/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1780/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1779 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1779/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1779/comments | https://api.github.com/repos/huggingface/datasets/issues/1779/events | https://github.com/huggingface/datasets/pull/1779 | 793,539,703 | MDExOlB1bGxSZXF1ZXN0NTYxMjEwNjI5 | 1,779 | Ignore definition line number of functions for caching | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 0 | 2021-01-25T16:42:29 | 2021-01-26T10:20:20 | 2021-01-26T10:20:19 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1779",
"html_url": "https://github.com/huggingface/datasets/pull/1779",
"diff_url": "https://github.com/huggingface/datasets/pull/1779.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1779.patch",
    "merged_at": "2021-01-26T10:20... | As noticed in #1718, when a function used for processing with `map` is moved within its Python file, the change of line number causes the caching mechanism to consider it a different function, so in this case it recomputes everything.
This is because we were not ignoring the line number definition f... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1779/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1779/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1778 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1778/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1778/comments | https://api.github.com/repos/huggingface/datasets/issues/1778/events | https://github.com/huggingface/datasets/pull/1778 | 793,474,507 | MDExOlB1bGxSZXF1ZXN0NTYxMTU2Mzk1 | 1,778 | Narrative QA Manual | {
"login": "rsanjaykamath",
"id": 18527321,
"node_id": "MDQ6VXNlcjE4NTI3MzIx",
"avatar_url": "https://avatars.githubusercontent.com/u/18527321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rsanjaykamath",
"html_url": "https://github.com/rsanjaykamath",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 6 | 2021-01-25T15:22:31 | 2021-01-29T09:35:14 | 2021-01-29T09:34:51 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1778",
"html_url": "https://github.com/huggingface/datasets/pull/1778",
"diff_url": "https://github.com/huggingface/datasets/pull/1778.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1778.patch",
    "merged_at": "2021-01-29T09:34... | Submitting the manual version of the Narrative QA script, which requires a manual download from the original repository | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1778/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1778/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1777/comments | https://api.github.com/repos/huggingface/datasets/issues/1777/events | https://github.com/huggingface/datasets/issues/1777 | 793,273,770 | MDU6SXNzdWU3OTMyNzM3NzA= | 1,777 | GPT2 MNLI training using run_glue.py | {
"login": "nlp-student",
"id": 76427077,
"node_id": "MDQ6VXNlcjc2NDI3MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/76427077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nlp-student",
"html_url": "https://github.com/nlp-student",
    "followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 0 | 2021-01-25T10:53:52 | 2021-01-25T11:12:53 | 2021-01-25T11:12:53 | NONE | null | null | null | Edit: I'm closing this because I actually meant to post this in `transformers`, not `datasets`
Running this on Google Colab,
```
!python run_glue.py \
--model_name_or_path gpt2 \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_gpu_train_batch_size 10 \
--gradient_accu... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1777/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1777/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1776/comments | https://api.github.com/repos/huggingface/datasets/issues/1776/events | https://github.com/huggingface/datasets/issues/1776 | 792,755,249 | MDU6SXNzdWU3OTI3NTUyNDk= | 1,776 | [Question & Bug Report] Can we preprocess a dataset on the fly? | {
"login": "shuaihuaiyi",
"id": 14048129,
"node_id": "MDQ6VXNlcjE0MDQ4MTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/14048129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuaihuaiyi",
"html_url": "https://github.com/shuaihuaiyi",
    "followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 6 | 2021-01-24T09:28:24 | 2021-05-20T04:15:58 | 2021-05-20T04:15:58 | NONE | null | null | null | I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus, which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating a cache?
BTW, I tried raising `writer_batch_si... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1776/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1776/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1775/comments | https://api.github.com/repos/huggingface/datasets/issues/1775/events | https://github.com/huggingface/datasets/issues/1775 | 792,742,120 | MDU6SXNzdWU3OTI3NDIxMjA= | 1,775 | Efficient ways to iterate the dataset | {
"login": "zhongpeixiang",
"id": 11826803,
"node_id": "MDQ6VXNlcjExODI2ODAz",
"avatar_url": "https://avatars.githubusercontent.com/u/11826803?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zhongpeixiang",
"html_url": "https://github.com/zhongpeixiang",
    "followers_url": "https://api.githu... | [] | closed | false | null | [] | null | 2 | 2021-01-24T07:54:31 | 2021-01-24T09:50:39 | 2021-01-24T09:50:39 | CONTRIBUTOR | null | null | null | For a large dataset that does not fit in memory, how can I select only a subset of features from each example?
If I iterate over the dataset and then select the subset of features one by one, the resulting memory usage will be huge. Is there any way to solve this?
Thanks | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1775/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1775/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1774 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1774/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1774/comments | https://api.github.com/repos/huggingface/datasets/issues/1774/events | https://github.com/huggingface/datasets/issues/1774 | 792,730,559 | MDU6SXNzdWU3OTI3MzA1NTk= | 1,774 | is it possible to make slice to be more compatible like python list and numpy? | {
"login": "world2vec",
"id": 7607120,
"node_id": "MDQ6VXNlcjc2MDcxMjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/world2vec",
"html_url": "https://github.com/world2vec",
"followers_url": "https://api.github.com/users/wo... | [] | open | false | null | [] | null | 2 | 2021-01-24T06:15:52 | 2021-01-24T23:36:18 | null | NONE | null | null | null | Hi,
see the error below:
```
AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples.
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1774/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1774/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1773/comments | https://api.github.com/repos/huggingface/datasets/issues/1773/events | https://github.com/huggingface/datasets/issues/1773 | 792,708,160 | MDU6SXNzdWU3OTI3MDgxNjA= | 1,773 | bug in loading datasets | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"f... | [] | closed | false | null | [] | null | 3 | 2021-01-24T02:53:45 | 2021-09-06T08:54:46 | 2021-08-04T18:13:01 | NONE | null | null | null | Hi,
I need to load a dataset; I use these commands:
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files={'train': 'sick/train.csv',
'test': 'sick/test.csv',
'validation': 'sick/validation.csv'})
prin... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1773/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1773/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1772/comments | https://api.github.com/repos/huggingface/datasets/issues/1772/events | https://github.com/huggingface/datasets/issues/1772 | 792,703,797 | MDU6SXNzdWU3OTI3MDM3OTc= | 1,772 | Adding SICK dataset | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"f... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | 0 | 2021-01-24T02:15:31 | 2021-02-05T15:49:25 | 2021-02-05T15:49:25 | NONE | null | null | null | Hi
It would be great to include the SICK dataset.
## Adding a Dataset
- **Name:** SICK
- **Description:** a well-known entailment dataset
- **Paper:** http://marcobaroni.org/composes/sick.html
- **Data:** http://marcobaroni.org/composes/sick.html
- **Motivation:** this is an important NLI benchmark
Instruction... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1772/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1772/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1771/comments | https://api.github.com/repos/huggingface/datasets/issues/1771/events | https://github.com/huggingface/datasets/issues/1771 | 792,701,276 | MDU6SXNzdWU3OTI3MDEyNzY= | 1,771 | Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py | {
"login": "world2vec",
"id": 7607120,
"node_id": "MDQ6VXNlcjc2MDcxMjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/world2vec",
"html_url": "https://github.com/world2vec",
"followers_url": "https://api.github.com/users/wo... | [] | closed | false | null | [] | null | 3 | 2021-01-24T01:53:52 | 2021-01-24T23:06:29 | 2021-01-24T23:06:29 | NONE | null | null | null | Hi,
When I `load_dataset` from local CSV files, the error below occurred; it looks like raw.githubusercontent.com is blocked by the Chinese government. But why does it need to download csv.py? Shouldn't it be included when the dataset package is pip-installed?
```
Traceback (most recent call last):
File "/home/tom/pyenv/pystory/lib/python3.6/site-p... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1771/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1771/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1770/comments | https://api.github.com/repos/huggingface/datasets/issues/1770/events | https://github.com/huggingface/datasets/issues/1770 | 792,698,148 | MDU6SXNzdWU3OTI2OTgxNDg= | 1,770 | how can I combine 2 dataset with different/same features? | {
"login": "world2vec",
"id": 7607120,
"node_id": "MDQ6VXNlcjc2MDcxMjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7607120?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/world2vec",
"html_url": "https://github.com/world2vec",
    "followers_url": "https://api.github.com/users/wo... | [] | open | false | null | [] | null | 2 | 2021-01-24T01:26:06 | 2021-01-24T23:43:54 | null | NONE | null | null | null | To combine 2 datasets with a one-to-one mapping, like `ds = zip(ds1, ds2)`:
ds1: {'text'}, ds2: {'text'}, combine ds:{'src', 'tgt'}
or with different features:
ds1: {'src'}, ds2: {'tgt'}, combine ds:{'src', 'tgt'} | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1770/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1770/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1769/comments | https://api.github.com/repos/huggingface/datasets/issues/1769/events | https://github.com/huggingface/datasets/issues/1769 | 792,523,284 | MDU6SXNzdWU3OTI1MjMyODQ= | 1,769 | _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2 | {
"login": "shuaihuaiyi",
"id": 14048129,
"node_id": "MDQ6VXNlcjE0MDQ4MTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/14048129?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shuaihuaiyi",
"html_url": "https://github.com/shuaihuaiyi",
    "followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 4 | 2021-01-23T10:13:00 | 2021-01-25T10:23:57 | null | NONE | null | null | null | It may be a bug in multiprocessing with Datasets; when I disable multiprocessing by setting num_proc to None, everything works fine.
The script I use is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py
Script args:
```
--model_name_or_path
../../../model/chine... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1769/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1769/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1768/comments | https://api.github.com/repos/huggingface/datasets/issues/1768/events | https://github.com/huggingface/datasets/pull/1768 | 792,150,745 | MDExOlB1bGxSZXF1ZXN0NTYwMDgyNzIx | 1,768 | Mention kwargs in the Dataset Formatting docs | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 0 | 2021-01-22T16:43:20 | 2021-01-31T12:33:10 | 2021-01-25T09:14:59 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1768",
"html_url": "https://github.com/huggingface/datasets/pull/1768",
"diff_url": "https://github.com/huggingface/datasets/pull/1768.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1768.patch",
"merged_at": "2021-01-25T09:14... | Hi,
This was discussed in Issue #1762, where the docs didn't mention that keyword arguments to `datasets.Dataset.set_format()` are allowed.
To prevent people from having to check the code/method docs, I just added a couple of lines in the docs.
Please let me know your thoughts on this.
Thanks,
Gunjan
@lho... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1768/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1768/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1767/comments | https://api.github.com/repos/huggingface/datasets/issues/1767/events | https://github.com/huggingface/datasets/pull/1767 | 792,068,497 | MDExOlB1bGxSZXF1ZXN0NTYwMDE2MzE2 | 1,767 | Add Librispeech ASR | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | 1 | 2021-01-22T14:54:37 | 2021-01-25T20:38:07 | 2021-01-25T20:37:42 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1767",
"html_url": "https://github.com/huggingface/datasets/pull/1767",
"diff_url": "https://github.com/huggingface/datasets/pull/1767.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1767.patch",
"merged_at": "2021-01-25T20:37... | This PR adds the librispeech asr dataset: https://www.tensorflow.org/datasets/catalog/librispeech
There are 2 configs, "clean" and "other"; "clean" has two "train" datasets, hence the names "train.100" and "train.360".
As suggested by @lhoestq, due to the enormous size of the dataset in `.arrow` f... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1767/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1767/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1766/comments | https://api.github.com/repos/huggingface/datasets/issues/1766/events | https://github.com/huggingface/datasets/issues/1766 | 792,044,105 | MDU6SXNzdWU3OTIwNDQxMDU= | 1,766 | Issues when run two programs compute the same metrics | {
"login": "lamthuy",
"id": 8089862,
"node_id": "MDQ6VXNlcjgwODk4NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8089862?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lamthuy",
"html_url": "https://github.com/lamthuy",
    "followers_url": "https://api.github.com/users/lamthuy/... | [] | closed | false | null | [] | null | 2 | 2021-01-22T14:22:55 | 2021-02-02T10:38:06 | 2021-02-02T10:38:06 | NONE | null | null | null | I got the following error when running two different programs that both compute sacrebleu metrics. It seems that both read and write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where they cache the batches:
```
File "train_matching_min.py", line 160, in <module>ch... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1766/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1766/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1765/comments | https://api.github.com/repos/huggingface/datasets/issues/1765/events | https://github.com/huggingface/datasets/issues/1765 | 791,553,065 | MDU6SXNzdWU3OTE1NTMwNjU= | 1,765 | Error iterating over Dataset with DataLoader | {
"login": "EvanZ",
"id": 1295082,
"node_id": "MDQ6VXNlcjEyOTUwODI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1295082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EvanZ",
"html_url": "https://github.com/EvanZ",
"followers_url": "https://api.github.com/users/EvanZ/follower... | [] | closed | false | null | [] | null | 5 | 2021-01-21T22:56:45 | 2021-12-07T12:22:33 | 2021-01-23T03:44:14 | NONE | null | null | null | I have a Dataset that I've mapped a tokenizer over:
```
encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])
encoded_dataset[:1]
```
```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 2... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1765/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1765/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1764/comments | https://api.github.com/repos/huggingface/datasets/issues/1764/events | https://github.com/huggingface/datasets/issues/1764 | 791,486,860 | MDU6SXNzdWU3OTE0ODY4NjA= | 1,764 | Connection Issues | {
"login": "SaeedNajafi",
"id": 12455298,
"node_id": "MDQ6VXNlcjEyNDU1Mjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/12455298?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SaeedNajafi",
"html_url": "https://github.com/SaeedNajafi",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 1 | 2021-01-21T20:56:09 | 2021-01-21T21:00:19 | 2021-01-21T21:00:02 | NONE | null | null | null | Today, I am getting connection issues while loading a dataset and the metric.
```
Traceback (most recent call last):
File "src/train.py", line 180, in <module>
train_dataset, dev_dataset, test_dataset = create_race_dataset()
File "src/train.py", line 130, in create_race_dataset
train_dataset = load_da... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1764/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1764/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1763/comments | https://api.github.com/repos/huggingface/datasets/issues/1763/events | https://github.com/huggingface/datasets/pull/1763 | 791,389,763 | MDExOlB1bGxSZXF1ZXN0NTU5NDU3MTY1 | 1,763 | PAWS-X: Fix csv Dictreader splitting data on quotes | {
"login": "gowtham1997",
"id": 9641196,
"node_id": "MDQ6VXNlcjk2NDExOTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/9641196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gowtham1997",
"html_url": "https://github.com/gowtham1997",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 0 | 2021-01-21T18:21:01 | 2021-01-22T10:14:33 | 2021-01-22T10:13:45 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1763",
"html_url": "https://github.com/huggingface/datasets/pull/1763",
"diff_url": "https://github.com/huggingface/datasets/pull/1763.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1763.patch",
"merged_at": "2021-01-22T10:13... |
```python
from datasets import load_dataset
# load english paws-x dataset
datasets = load_dataset('paws-x', 'en')
print(len(datasets['train'])) # outputs 49202 but official dataset has 49401 pairs
print(datasets['train'].unique('label')) # outputs [1, 0, -1] but labels are binary [0,1]
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1763/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1763/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1762 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1762/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1762/comments | https://api.github.com/repos/huggingface/datasets/issues/1762/events | https://github.com/huggingface/datasets/issues/1762 | 791,226,007 | MDU6SXNzdWU3OTEyMjYwMDc= | 1,762 | Unable to format dataset to CUDA Tensors | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 6 | 2021-01-21T15:31:23 | 2021-02-02T07:13:22 | 2021-02-02T07:13:22 | CONTRIBUTOR | null | null | null | Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
I tried this, but Dataset doesn't suppor... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1762/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1762/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1761 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1761/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1761/comments | https://api.github.com/repos/huggingface/datasets/issues/1761/events | https://github.com/huggingface/datasets/pull/1761 | 791,150,858 | MDExOlB1bGxSZXF1ZXN0NTU5MjUyMzEw | 1,761 | Add SILICONE benchmark | {
"login": "eusip",
"id": 1551356,
"node_id": "MDQ6VXNlcjE1NTEzNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eusip",
"html_url": "https://github.com/eusip",
"followers_url": "https://api.github.com/users/eusip/follower... | [] | closed | false | null | [] | null | 8 | 2021-01-21T14:29:12 | 2021-02-04T14:32:48 | 2021-01-26T13:50:31 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1761",
"html_url": "https://github.com/huggingface/datasets/pull/1761",
"diff_url": "https://github.com/huggingface/datasets/pull/1761.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1761.patch",
"merged_at": "2021-01-26T13:50... | My collaborators and I within the Affective Computing team at Telecom Paris would like to re-submit our spoken dialogue dataset for publication.
This is a new pull request relative to the [previously closed request](https://github.com/huggingface/datasets/pull/1712) which was reviewed by @lhoestq.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1761/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1761/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1760/comments | https://api.github.com/repos/huggingface/datasets/issues/1760/events | https://github.com/huggingface/datasets/pull/1760 | 791,110,857 | MDExOlB1bGxSZXF1ZXN0NTU5MjE3MjY0 | 1,760 | More tags | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 2 | 2021-01-21T13:50:10 | 2021-01-22T09:40:01 | 2021-01-22T09:40:00 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1760",
"html_url": "https://github.com/huggingface/datasets/pull/1760",
"diff_url": "https://github.com/huggingface/datasets/pull/1760.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1760.patch",
"merged_at": "2021-01-22T09:40... | Since the hub v2 is going to be released soon I figured it would be great to add the missing tags at least for some of the datasets of reference listed [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#write-the-loadingprocessing-code) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1760/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1760/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1759/comments | https://api.github.com/repos/huggingface/datasets/issues/1759/events | https://github.com/huggingface/datasets/issues/1759 | 790,992,226 | MDU6SXNzdWU3OTA5OTIyMjY= | 1,759 | wikipedia dataset incomplete | {
"login": "ChrisDelClea",
"id": 19912393,
"node_id": "MDQ6VXNlcjE5OTEyMzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/19912393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ChrisDelClea",
"html_url": "https://github.com/ChrisDelClea",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 4 | 2021-01-21T11:47:15 | 2021-01-21T17:22:11 | 2021-01-21T17:21:06 | NONE | null | null | null | Hey guys,
I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset.
Unfortunately, I found out that the German dataset is incomplete.
For reasons unknown to me, the number of inhabitants has been removed from many pages:
Thorey-sur-Ouche has 128 inhabitants a... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1759/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1759/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1758/comments | https://api.github.com/repos/huggingface/datasets/issues/1758/events | https://github.com/huggingface/datasets/issues/1758 | 790,626,116 | MDU6SXNzdWU3OTA2MjYxMTY= | 1,758 | dataset.search() (elastic) cannot reliably retrieve search results | {
"login": "afogarty85",
"id": 49048309,
"node_id": "MDQ6VXNlcjQ5MDQ4MzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/afogarty85",
"html_url": "https://github.com/afogarty85",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 2 | 2021-01-21T02:26:37 | 2021-01-22T00:25:50 | 2021-01-22T00:25:50 | NONE | null | null | null | I am trying to use Elasticsearch to retrieve the indices of items in the dataset in their precise order, given shuffled training indices.
The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer.
I am indexing data t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1758/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1758/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1757/comments | https://api.github.com/repos/huggingface/datasets/issues/1757/events | https://github.com/huggingface/datasets/issues/1757 | 790,466,509 | MDU6SXNzdWU3OTA0NjY1MDk= | 1,757 | FewRel | {
"login": "dspoka",
"id": 6183050,
"node_id": "MDQ6VXNlcjYxODMwNTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6183050?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dspoka",
"html_url": "https://github.com/dspoka",
"followers_url": "https://api.github.com/users/dspoka/foll... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | 5 | 2021-01-20T23:56:03 | 2021-03-09T02:52:05 | 2021-03-08T14:34:52 | NONE | null | null | null | ## Adding a Dataset
- **Name:** FewRel
- **Description:** Large-Scale Supervised Few-Shot Relation Classification Dataset
- **Paper:** @inproceedings{han2018fewrel,
title={FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation},
auth... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1757/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1757/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1756/comments | https://api.github.com/repos/huggingface/datasets/issues/1756/events | https://github.com/huggingface/datasets/issues/1756 | 790,380,028 | MDU6SXNzdWU3OTAzODAwMjg= | 1,756 | Ccaligned multilingual translation dataset | {
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi0... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | 0 | 2021-01-20T22:18:44 | 2021-03-01T10:36:21 | 2021-03-01T10:36:21 | NONE | null | null | null | ## Adding a Dataset
- **Name:** CCAligned
- **Description:** *short description of the dataset (or link to social media or blog post)*
- CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1756/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1756/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1755/comments | https://api.github.com/repos/huggingface/datasets/issues/1755/events | https://github.com/huggingface/datasets/issues/1755 | 790,324,734 | MDU6SXNzdWU3OTAzMjQ3MzQ= | 1,755 | Using select/reordering datasets slows operations down immensely | {
"login": "afogarty85",
"id": 49048309,
"node_id": "MDQ6VXNlcjQ5MDQ4MzA5",
"avatar_url": "https://avatars.githubusercontent.com/u/49048309?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/afogarty85",
"html_url": "https://github.com/afogarty85",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 2 | 2021-01-20T21:12:12 | 2021-01-20T22:03:39 | 2021-01-20T22:03:39 | NONE | null | null | null | I am using portions of HF's helpful work in preparing / scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-order the dataset, computations slow down immensely: the total scoring process on 131k training examples, which would take maybe 3 minutes, now takes over an hour.
The below examp... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1755/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1755/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1754/comments | https://api.github.com/repos/huggingface/datasets/issues/1754/events | https://github.com/huggingface/datasets/pull/1754 | 789,881,730 | MDExOlB1bGxSZXF1ZXN0NTU4MTU5NjEw | 1,754 | Use a config id in the cache directory names for custom configs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 0 | 2021-01-20T11:11:00 | 2021-01-25T09:12:07 | 2021-01-25T09:12:06 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1754",
"html_url": "https://github.com/huggingface/datasets/pull/1754",
"diff_url": "https://github.com/huggingface/datasets/pull/1754.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1754.patch",
"merged_at": "2021-01-25T09:12... | As noticed by @JetRunner, there were some issues when trying to generate a dataset using a custom config that is based on an existing config.
For example, in the following code, `mnli_custom` would reuse the cache used to create `mnli` instead of generating a new dataset with the new label classes:
```python
from ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1754/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1754/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1753/comments | https://api.github.com/repos/huggingface/datasets/issues/1753/events | https://github.com/huggingface/datasets/pull/1753 | 789,867,685 | MDExOlB1bGxSZXF1ZXN0NTU4MTQ3Njkx | 1,753 | fix comet citations | {
"login": "ricardorei",
"id": 17256847,
"node_id": "MDQ6VXNlcjE3MjU2ODQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ricardorei",
"html_url": "https://github.com/ricardorei",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 0 | 2021-01-20T10:52:38 | 2021-01-20T14:39:30 | 2021-01-20T14:39:30 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1753",
"html_url": "https://github.com/huggingface/datasets/pull/1753",
"diff_url": "https://github.com/huggingface/datasets/pull/1753.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1753.patch",
"merged_at": "2021-01-20T14:39... | I realized the COMET citations were not showing on the Hugging Face metrics page:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105164848-8b9da900-5b0d-11eb-9e20-a38f559d2037.png">
This pull request is intended to fix that.
Thanks! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1753/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1753/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1752 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1752/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1752/comments | https://api.github.com/repos/huggingface/datasets/issues/1752/events | https://github.com/huggingface/datasets/pull/1752 | 789,822,459 | MDExOlB1bGxSZXF1ZXN0NTU4MTA5NTA5 | 1,752 | COMET metric citation | {
"login": "ricardorei",
"id": 17256847,
"node_id": "MDQ6VXNlcjE3MjU2ODQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/17256847?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ricardorei",
"html_url": "https://github.com/ricardorei",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2021-01-20T09:54:43 | 2021-01-20T10:27:07 | 2021-01-20T10:25:02 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1752",
"html_url": "https://github.com/huggingface/datasets/pull/1752",
"diff_url": "https://github.com/huggingface/datasets/pull/1752.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1752.patch",
"merged_at": null
} | In my last pull request to add the COMET metric, the citations were not following the usual "format". Because of that, they were not correctly displayed on the website:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105158000-686efb80-5b05-11eb-8bb0-9c8... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1752/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1752/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1751/comments | https://api.github.com/repos/huggingface/datasets/issues/1751/events | https://github.com/huggingface/datasets/pull/1751 | 789,232,980 | MDExOlB1bGxSZXF1ZXN0NTU3NjA1ODE2 | 1,751 | Updated README for the Social Bias Frames dataset | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | 0 | 2021-01-19T17:53:00 | 2021-01-20T14:56:52 | 2021-01-20T14:56:52 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1751",
"html_url": "https://github.com/huggingface/datasets/pull/1751",
"diff_url": "https://github.com/huggingface/datasets/pull/1751.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1751.patch",
"merged_at": "2021-01-20T14:56... | See the updated card at https://github.com/mcmillanmajora/datasets/tree/add-SBIC-card/datasets/social_bias_frames. I incorporated information from the [SBIC data statement](https://homes.cs.washington.edu/~msap/social-bias-frames/DATASTATEMENT.html), paper, and the corpus README file included with the dataset download. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1751/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1751/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1750/comments | https://api.github.com/repos/huggingface/datasets/issues/1750/events | https://github.com/huggingface/datasets/pull/1750 | 788,668,085 | MDExOlB1bGxSZXF1ZXN0NTU3MTM1MzM1 | 1,750 | Fix typo in README.md of cnn_dailymail | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 2 | 2021-01-19T03:06:05 | 2021-01-19T11:07:29 | 2021-01-19T09:48:43 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1750",
"html_url": "https://github.com/huggingface/datasets/pull/1750",
"diff_url": "https://github.com/huggingface/datasets/pull/1750.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1750.patch",
"merged_at": "2021-01-19T09:48... | When I read the README.md of `CNN/DailyMail Dataset`, there seems to be a typo `CCN`.
I am afraid this is a trivial matter, but I would like to make a suggestion for revision. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1750/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1750/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1749 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1749/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1749/comments | https://api.github.com/repos/huggingface/datasets/issues/1749/events | https://github.com/huggingface/datasets/pull/1749 | 788,476,639 | MDExOlB1bGxSZXF1ZXN0NTU2OTgxMDc5 | 1,749 | Added metadata and correct splits for swda. | {
"login": "gmihaila",
"id": 22454783,
"node_id": "MDQ6VXNlcjIyNDU0Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/22454783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmihaila",
"html_url": "https://github.com/gmihaila",
"followers_url": "https://api.github.com/users/gmi... | [] | closed | false | null | [] | null | 2 | 2021-01-18T18:36:32 | 2021-01-29T19:35:52 | 2021-01-29T18:38:08 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1749",
"html_url": "https://github.com/huggingface/datasets/pull/1749",
"diff_url": "https://github.com/huggingface/datasets/pull/1749.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1749.patch",
"merged_at": "2021-01-29T18:38... | Switchboard Dialog Act Corpus
I made some changes following @bhavitvyamalik's recommendation in #1678:
* Contains all metadata.
* Used official implementation from the [/swda](https://github.com/cgpotts/swda) repo.
* Add official train and test splits used in [Stolcke et al. (2000)](https://web.stanford.edu/~jur... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1749/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1749/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1748/comments | https://api.github.com/repos/huggingface/datasets/issues/1748/events | https://github.com/huggingface/datasets/pull/1748 | 788,431,642 | MDExOlB1bGxSZXF1ZXN0NTU2OTQ0NDEx | 1,748 | add Stuctured Argument Extraction for Korean dataset | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/ste... | [] | closed | false | null | [] | null | 0 | 2021-01-18T17:14:19 | 2021-09-17T16:53:18 | 2021-01-19T11:26:58 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1748",
"html_url": "https://github.com/huggingface/datasets/pull/1748",
"diff_url": "https://github.com/huggingface/datasets/pull/1748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1748.patch",
"merged_at": "2021-01-19T11:26... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1748/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1748/timeline | null | |
https://api.github.com/repos/huggingface/datasets/issues/1747 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1747/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1747/comments | https://api.github.com/repos/huggingface/datasets/issues/1747/events | https://github.com/huggingface/datasets/issues/1747 | 788,299,775 | MDU6SXNzdWU3ODgyOTk3NzU= | 1,747 | datasets slicing with seed | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"f... | [] | open | false | null | [] | null | 2 | 2021-01-18T14:08:55 | 2021-01-18T14:45:34 | null | NONE | null | null | null | Hi
I need to slice a dataset with a random seed. I looked into the documentation here: https://huggingface.co/docs/datasets/splits.html
I could not find a seed option. Could you please assist me with how I can get a slice for different seeds?
thank you.
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1747/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1747/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1746/comments | https://api.github.com/repos/huggingface/datasets/issues/1746/events | https://github.com/huggingface/datasets/pull/1746 | 788,188,184 | MDExOlB1bGxSZXF1ZXN0NTU2NzQxMjIw | 1,746 | Fix release conda worflow | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 0 | 2021-01-18T11:29:10 | 2021-01-18T11:31:24 | 2021-01-18T11:31:23 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1746",
"html_url": "https://github.com/huggingface/datasets/pull/1746",
"diff_url": "https://github.com/huggingface/datasets/pull/1746.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1746.patch",
"merged_at": "2021-01-18T11:31... | The current workflow yaml file is not valid according to https://github.com/huggingface/datasets/actions/runs/487638110 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1746/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1746/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1745/comments | https://api.github.com/repos/huggingface/datasets/issues/1745/events | https://github.com/huggingface/datasets/issues/1745 | 787,838,256 | MDU6SXNzdWU3ODc4MzgyNTY= | 1,745 | difference between wsc and wsc.fixed for superglue | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"f... | [] | closed | false | null | [] | null | 1 | 2021-01-18T00:50:19 | 2021-01-18T11:02:43 | 2021-01-18T00:59:34 | NONE | null | null | null | Hi
I see two versions of wsc in superglue, and I am not sure what the differences are and which one is the original. Could you help clarify the differences? Thanks @lhoestq
"url": "https://api.github.com/repos/huggingface/datasets/issues/1745/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1745/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1744/comments | https://api.github.com/repos/huggingface/datasets/issues/1744/events | https://github.com/huggingface/datasets/pull/1744 | 787,649,811 | MDExOlB1bGxSZXF1ZXN0NTU2MzA0MjU4 | 1,744 | Add missing "brief" entries to reuters | {
"login": "jbragg",
"id": 2238344,
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbragg",
"html_url": "https://github.com/jbragg",
"followers_url": "https://api.github.com/users/jbragg/foll... | [] | closed | false | null | [] | null | 2 | 2021-01-17T07:58:49 | 2021-01-18T11:26:09 | 2021-01-18T11:26:09 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1744",
"html_url": "https://github.com/huggingface/datasets/pull/1744",
"diff_url": "https://github.com/huggingface/datasets/pull/1744.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1744.patch",
"merged_at": "2021-01-18T11:26... | This brings the number of examples for ModApte to match the stated `Training set (9,603 docs)...Test Set (3,299 docs)` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1744/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1744/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1743 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1743/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1743/comments | https://api.github.com/repos/huggingface/datasets/issues/1743/events | https://github.com/huggingface/datasets/issues/1743 | 787,631,412 | MDU6SXNzdWU3ODc2MzE0MTI= | 1,743 | Issue while Creating Custom Metric | {
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 2 | 2021-01-17T07:01:14 | 2021-01-22T16:45:00 | null | CONTRIBUTOR | null | null | null | Hi Team,
I am trying to create a custom metric for my training as follows, where f1 is my own metric:
```python
def _info(self):
# TODO: Specifies the datasets.MetricInfo object
return datasets.MetricInfo(
# This is the description that will appear on the metrics page.
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1743/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1743/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1742/comments | https://api.github.com/repos/huggingface/datasets/issues/1742/events | https://github.com/huggingface/datasets/pull/1742 | 787,623,640 | MDExOlB1bGxSZXF1ZXN0NTU2MjgyMDYw | 1,742 | Add GLUE Compat (compatible with transformers<3.5.0) | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 2 | 2021-01-17T05:54:25 | 2021-03-29T12:43:30 | 2021-03-29T12:43:30 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1742",
"html_url": "https://github.com/huggingface/datasets/pull/1742",
"diff_url": "https://github.com/huggingface/datasets/pull/1742.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1742.patch",
"merged_at": null
} | Link to our discussion on Slack (HF internal)
https://huggingface.slack.com/archives/C014N4749J9/p1609668119337400
The next step is to add a compatible option in the new `run_glue.py`
I duplicated `glue` and made the following changes:
1. Change the name to `glue_compat`.
2. Change the label assignments for MN... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1742/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1742/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1741/comments | https://api.github.com/repos/huggingface/datasets/issues/1741/events | https://github.com/huggingface/datasets/issues/1741 | 787,327,060 | MDU6SXNzdWU3ODczMjcwNjA= | 1,741 | error when run fine_tuning on text_classification | {
"login": "XiaoYang66",
"id": 43234824,
"node_id": "MDQ6VXNlcjQzMjM0ODI0",
"avatar_url": "https://avatars.githubusercontent.com/u/43234824?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/XiaoYang66",
"html_url": "https://github.com/XiaoYang66",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2021-01-16T02:23:19 | 2021-01-16T02:39:28 | 2021-01-16T02:39:18 | NONE | null | null | null | dataset:sem_eval_2014_task_1
pretrained_model:bert-base-uncased
error description:
When I use these resources to fine-tune a text classification model on sem_eval_2014_task_1, there is always some problem (the error also occurs when I use other datasets). And I followed the colab code (url:https://colab.researc... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1741/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1741/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1740 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1740/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1740/comments | https://api.github.com/repos/huggingface/datasets/issues/1740/events | https://github.com/huggingface/datasets/pull/1740 | 787,264,605 | MDExOlB1bGxSZXF1ZXN0NTU2MDA5NjM1 | 1,740 | add id_liputan6 dataset | {
"login": "cahya-wirawan",
"id": 7669893,
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cahya-wirawan",
"html_url": "https://github.com/cahya-wirawan",
"followers_url": "https://api.github.... | [] | closed | false | null | [] | null | 0 | 2021-01-15T22:58:34 | 2021-01-20T13:41:26 | 2021-01-20T13:41:26 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1740",
"html_url": "https://github.com/huggingface/datasets/pull/1740",
"diff_url": "https://github.com/huggingface/datasets/pull/1740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1740.patch",
"merged_at": "2021-01-20T13:41... | id_liputan6 is a large-scale Indonesian summarization dataset. The articles were harvested from an online news portal, and obtain 215,827 document-summary pairs: https://arxiv.org/abs/2011.00679 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1740/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1740/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1739/comments | https://api.github.com/repos/huggingface/datasets/issues/1739/events | https://github.com/huggingface/datasets/pull/1739 | 787,219,138 | MDExOlB1bGxSZXF1ZXN0NTU1OTY5Njgx | 1,739 | fixes and improvements for the WebNLG loader | {
"login": "Shimorina",
"id": 9607332,
"node_id": "MDQ6VXNlcjk2MDczMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9607332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shimorina",
"html_url": "https://github.com/Shimorina",
"followers_url": "https://api.github.com/users/Sh... | [] | closed | false | null | [] | null | 5 | 2021-01-15T21:45:23 | 2021-01-29T14:34:06 | 2021-01-29T10:53:03 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1739",
"html_url": "https://github.com/huggingface/datasets/pull/1739",
"diff_url": "https://github.com/huggingface/datasets/pull/1739.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1739.patch",
"merged_at": "2021-01-29T10:53... | - fixes test sets loading in v3.0
- adds additional fields for v3.0_ru
- adds info to the WebNLG data card | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1739/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1739/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1738/comments | https://api.github.com/repos/huggingface/datasets/issues/1738/events | https://github.com/huggingface/datasets/pull/1738 | 786,068,440 | MDExOlB1bGxSZXF1ZXN0NTU0OTk2NDU4 | 1,738 | Conda support | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 3 | 2021-01-14T15:11:25 | 2021-01-15T10:08:20 | 2021-01-15T10:08:19 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1738",
"html_url": "https://github.com/huggingface/datasets/pull/1738",
"diff_url": "https://github.com/huggingface/datasets/pull/1738.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1738.patch",
"merged_at": "2021-01-15T10:08... | Will push a new version on anaconda cloud every time a tag starting with `v` is pushed (like `v1.2.2`).
Will appear here: https://anaconda.org/huggingface/datasets
Depends on `conda-forge` for now, so the following is required for installation:
```
conda install -c huggingface -c conda-forge datasets
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1738/reactions",
"total_count": 4,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 4,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1738/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1737/comments | https://api.github.com/repos/huggingface/datasets/issues/1737/events | https://github.com/huggingface/datasets/pull/1737 | 785,606,286 | MDExOlB1bGxSZXF1ZXN0NTU0NjA2ODg5 | 1,737 | update link in TLC to be github links | {
"login": "chameleonTK",
"id": 6429850,
"node_id": "MDQ6VXNlcjY0Mjk4NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6429850?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chameleonTK",
"html_url": "https://github.com/chameleonTK",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 1 | 2021-01-14T02:49:21 | 2021-01-14T10:25:24 | 2021-01-14T10:25:24 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1737",
"html_url": "https://github.com/huggingface/datasets/pull/1737",
"diff_url": "https://github.com/huggingface/datasets/pull/1737.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1737.patch",
"merged_at": "2021-01-14T10:25... | Base on this issue https://github.com/huggingface/datasets/issues/1064, I can now use the official links.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1737/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1737/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1736/comments | https://api.github.com/repos/huggingface/datasets/issues/1736/events | https://github.com/huggingface/datasets/pull/1736 | 785,433,854 | MDExOlB1bGxSZXF1ZXN0NTU0NDYyNjYw | 1,736 | Adjust BrWaC dataset features name | {
"login": "jonatasgrosman",
"id": 5097052,
"node_id": "MDQ6VXNlcjUwOTcwNTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5097052?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonatasgrosman",
"html_url": "https://github.com/jonatasgrosman",
"followers_url": "https://api.gith... | [] | closed | false | null | [] | null | 0 | 2021-01-13T20:39:04 | 2021-01-14T10:29:38 | 2021-01-14T10:29:38 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1736",
"html_url": "https://github.com/huggingface/datasets/pull/1736",
"diff_url": "https://github.com/huggingface/datasets/pull/1736.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1736.patch",
"merged_at": "2021-01-14T10:29... | I added this dataset some days ago, and today I used it to train some models and realized that the names of the features aren't so good.
Looking at the current features hierarchy, we have "paragraphs" with a list of "sentences" with a list of "sentences?!". But the actual hierarchy is a "text" with a list of "paragr... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1736/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1736/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1735 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1735/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1735/comments | https://api.github.com/repos/huggingface/datasets/issues/1735/events | https://github.com/huggingface/datasets/pull/1735 | 785,184,740 | MDExOlB1bGxSZXF1ZXN0NTU0MjUzMDcw | 1,735 | Update add new dataset template | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugge... | [] | closed | false | null | [] | null | 2 | 2021-01-13T15:08:09 | 2021-01-14T15:16:01 | 2021-01-14T15:16:00 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1735",
"html_url": "https://github.com/huggingface/datasets/pull/1735",
"diff_url": "https://github.com/huggingface/datasets/pull/1735.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1735.patch",
"merged_at": "2021-01-14T15:16... | This PR fixes a few typos in the "Add new dataset template" and clarifies a bit what to do for the dummy data creation when the `auto_generate` flag can't work. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1735/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1735/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1734/comments | https://api.github.com/repos/huggingface/datasets/issues/1734/events | https://github.com/huggingface/datasets/pull/1734 | 784,956,707 | MDExOlB1bGxSZXF1ZXN0NTU0MDYxMzMz | 1,734 | Fix empty token bug for `thainer` and `lst20` | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 0 | 2021-01-13T09:55:09 | 2021-01-14T10:42:18 | 2021-01-14T10:42:18 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1734",
"html_url": "https://github.com/huggingface/datasets/pull/1734",
"diff_url": "https://github.com/huggingface/datasets/pull/1734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1734.patch",
"merged_at": "2021-01-14T10:42... | add a condition to check if tokens exist before yielding in `thainer` and `lst20` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1734/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1734/timeline | null |
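The fix described in this PR — checking that tokens exist before yielding — can be sketched generically, outside the actual `thainer`/`lst20` loading scripts (the field name below is illustrative):

```python
def generate_examples(raw_sentences):
    """Yield only sentences that actually contain tokens.

    Mirrors the guard added for `thainer` and `lst20`: empty token lists
    are skipped instead of being emitted as empty examples.
    """
    for idx, tokens in enumerate(raw_sentences):
        if not tokens:  # skip empty token lists
            continue
        yield idx, {"tokens": tokens}
```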
https://api.github.com/repos/huggingface/datasets/issues/1733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1733/comments | https://api.github.com/repos/huggingface/datasets/issues/1733/events | https://github.com/huggingface/datasets/issues/1733 | 784,903,002 | MDU6SXNzdWU3ODQ5MDMwMDI= | 1,733 | connection issue with glue, what is the data url for glue? | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"f... | [] | closed | false | null | [] | null | 1 | 2021-01-13T08:37:40 | 2021-08-04T18:13:55 | 2021-08-04T18:13:55 | NONE | null | null | null | Hi
My code sometimes fails due to a connection issue with GLUE. Could you tell me the URL the datasets library is trying to read GLUE from, so I can test on the machines I am working on whether the issue is on my side or not?
thanks | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1733/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1733/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1732/comments | https://api.github.com/repos/huggingface/datasets/issues/1732/events | https://github.com/huggingface/datasets/pull/1732 | 784,874,490 | MDExOlB1bGxSZXF1ZXN0NTUzOTkzNTAx | 1,732 | [GEM Dataset] Added TurkCorpus, an evaluation dataset for sentence simplification. | {
"login": "mounicam",
"id": 11708999,
"node_id": "MDQ6VXNlcjExNzA4OTk5",
"avatar_url": "https://avatars.githubusercontent.com/u/11708999?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mounicam",
"html_url": "https://github.com/mounicam",
"followers_url": "https://api.github.com/users/mou... | [] | closed | false | null | [] | null | 1 | 2021-01-13T07:50:19 | 2021-01-14T10:19:41 | 2021-01-14T10:19:41 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1732",
"html_url": "https://github.com/huggingface/datasets/pull/1732",
"diff_url": "https://github.com/huggingface/datasets/pull/1732.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1732.patch",
"merged_at": "2021-01-14T10:19... | We want to use TurkCorpus for validation and testing of the sentence simplification task. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1732/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1732/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1731/comments | https://api.github.com/repos/huggingface/datasets/issues/1731/events | https://github.com/huggingface/datasets/issues/1731 | 784,744,674 | MDU6SXNzdWU3ODQ3NDQ2NzQ= | 1,731 | Couldn't reach swda.py | {
"login": "yangp725",
"id": 13365326,
"node_id": "MDQ6VXNlcjEzMzY1MzI2",
"avatar_url": "https://avatars.githubusercontent.com/u/13365326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yangp725",
"html_url": "https://github.com/yangp725",
"followers_url": "https://api.github.com/users/yan... | [] | closed | false | null | [] | null | 2 | 2021-01-13T02:57:40 | 2021-01-13T11:17:40 | 2021-01-13T11:17:40 | NONE | null | null | null | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/swda/swda.py
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1731/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1731/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1730 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1730/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1730/comments | https://api.github.com/repos/huggingface/datasets/issues/1730/events | https://github.com/huggingface/datasets/pull/1730 | 784,617,525 | MDExOlB1bGxSZXF1ZXN0NTUzNzgxMDY0 | 1,730 | Add MNIST dataset | {
"login": "sgugger",
"id": 35901082,
"node_id": "MDQ6VXNlcjM1OTAxMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sgugger",
"html_url": "https://github.com/sgugger",
"followers_url": "https://api.github.com/users/sgugge... | [] | closed | false | null | [] | null | 0 | 2021-01-12T21:48:02 | 2021-01-13T10:19:47 | 2021-01-13T10:19:46 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1730",
"html_url": "https://github.com/huggingface/datasets/pull/1730",
"diff_url": "https://github.com/huggingface/datasets/pull/1730.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1730.patch",
"merged_at": "2021-01-13T10:19... | This PR adds the MNIST dataset to the library. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1730/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1730/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1729/comments | https://api.github.com/repos/huggingface/datasets/issues/1729/events | https://github.com/huggingface/datasets/issues/1729 | 784,565,898 | MDU6SXNzdWU3ODQ1NjU4OTg= | 1,729 | Is there support for Deep learning datasets? | {
"login": "pablodz",
"id": 28235457,
"node_id": "MDQ6VXNlcjI4MjM1NDU3",
"avatar_url": "https://avatars.githubusercontent.com/u/28235457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pablodz",
"html_url": "https://github.com/pablodz",
"followers_url": "https://api.github.com/users/pablod... | [] | closed | false | null | [] | null | 1 | 2021-01-12T20:22:41 | 2021-03-31T04:24:07 | 2021-03-31T04:24:07 | NONE | null | null | null | I looked around this repository and looking the datasets I think that there's no support for images-datasets. Or am I missing something? For example to add a repo like this https://github.com/DZPeru/fish-datasets | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1729/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1729/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1728/comments | https://api.github.com/repos/huggingface/datasets/issues/1728/events | https://github.com/huggingface/datasets/issues/1728 | 784,458,342 | MDU6SXNzdWU3ODQ0NTgzNDI= | 1,728 | Add an entry to an arrow dataset | {
"login": "ameet-1997",
"id": 18645407,
"node_id": "MDQ6VXNlcjE4NjQ1NDA3",
"avatar_url": "https://avatars.githubusercontent.com/u/18645407?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ameet-1997",
"html_url": "https://github.com/ameet-1997",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 5 | 2021-01-12T18:01:47 | 2021-01-18T19:15:32 | 2021-01-18T19:15:32 | NONE | null | null | null | Is it possible to add an entry to a dataset object?
**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**
For example, say we have the following code:
``` python
from datasets import load_dataset
# Load a dataset and print the first examples in the training s... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1728/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1728/timeline | null |
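The pattern asked about in this issue — transform each sentence and add the results back to the original data — can be illustrated without the `datasets` library at all; with the library, the analogous approach is to build a new dataset from the transformed rows and combine it with `datasets.concatenate_datasets`. A minimal sketch on plain dicts (the `augment` helper and the `sentence` field are hypothetical):

```python
def augment(examples, transform):
    """Return the original examples followed by their transformed copies.

    `examples` is a list of dicts with a "sentence" field; `transform`
    is any str -> str function (e.g. paraphrasing, lowercasing, ...).
    """
    transformed = [{"sentence": transform(e["sentence"])} for e in examples]
    return examples + transformed
```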
https://api.github.com/repos/huggingface/datasets/issues/1727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1727/comments | https://api.github.com/repos/huggingface/datasets/issues/1727/events | https://github.com/huggingface/datasets/issues/1727 | 784,435,131 | MDU6SXNzdWU3ODQ0MzUxMzE= | 1,727 | BLEURT score calculation raises UnrecognizedFlagError | {
"login": "nadavo",
"id": 6603920,
"node_id": "MDQ6VXNlcjY2MDM5MjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6603920?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nadavo",
"html_url": "https://github.com/nadavo",
"followers_url": "https://api.github.com/users/nadavo/foll... | [] | open | false | null | [] | null | 9 | 2021-01-12T17:27:02 | 2021-04-12T22:21:41 | null | NONE | null | null | null | Calling the `compute` method for **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
```
from datasets import load_metric
bleurt = load_me... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1727/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1727/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1726 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1726/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1726/comments | https://api.github.com/repos/huggingface/datasets/issues/1726/events | https://github.com/huggingface/datasets/pull/1726 | 784,336,370 | MDExOlB1bGxSZXF1ZXN0NTUzNTQ0ODg4 | 1,726 | Offline loading | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 6 | 2021-01-12T15:21:57 | 2021-01-28T18:05:22 | 2021-01-19T16:42:32 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1726",
"html_url": "https://github.com/huggingface/datasets/pull/1726",
"diff_url": "https://github.com/huggingface/datasets/pull/1726.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1726.patch",
"merged_at": "2021-01-19T16:42... | As discussed in #824 it would be cool to make the library work in offline mode.
Currently, if there is no internet connection, modules (datasets or metrics) that have already been loaded in the past can't be loaded again, and a ConnectionError is raised.
This is because `prepare_module` fetches online for the latest vers... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1726/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 1,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1726/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1725 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1725/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1725/comments | https://api.github.com/repos/huggingface/datasets/issues/1725/events | https://github.com/huggingface/datasets/issues/1725 | 784,182,273 | MDU6SXNzdWU3ODQxODIyNzM= | 1,725 | load the local dataset | {
"login": "xinjicong",
"id": 41193842,
"node_id": "MDQ6VXNlcjQxMTkzODQy",
"avatar_url": "https://avatars.githubusercontent.com/u/41193842?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xinjicong",
"html_url": "https://github.com/xinjicong",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | 6 | 2021-01-12T12:12:55 | 2021-03-03T10:55:43 | null | NONE | null | null | null | your guidebook's example is like
>>>from datasets import load_dataset
>>> dataset = load_dataset('json', data_files='my_file.json')
but the first argument is a path...
So what should I do if I want to load a local dataset for model training?
I will be grateful if you can help me handle this problem!
Thanks a lot! | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1725/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1725/timeline | null |
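For the question above, the `json` loading script reads the local files passed via `data_files`; a common layout for such files is JSON Lines (one JSON object per line). The sketch below uses only the standard library to build and inspect a file in that layout — the actual loading would still be done by `load_dataset('json', data_files=...)`, so treat this as an assumption-laden illustration of the file format rather than the library's own code:

```python
import json
import os
import tempfile

# Write a small JSON Lines file: one JSON object per line.
rows = [{"text": "first example"}, {"text": "second example"}]
path = os.path.join(tempfile.mkdtemp(), "my_file.json")
with open(path, "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")

# Read it back line by line to show the structure being ingested.
with open(path) as f:
    loaded = [json.loads(line) for line in f]
```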
https://api.github.com/repos/huggingface/datasets/issues/1723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1723/comments | https://api.github.com/repos/huggingface/datasets/issues/1723/events | https://github.com/huggingface/datasets/pull/1723 | 783,982,100 | MDExOlB1bGxSZXF1ZXN0NTUzMjQ4MzU1 | 1,723 | ADD S3 support for downloading and uploading processed datasets | {
"login": "philschmid",
"id": 32632186,
"node_id": "MDQ6VXNlcjMyNjMyMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/32632186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/philschmid",
"html_url": "https://github.com/philschmid",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2021-01-12T07:17:34 | 2021-01-26T17:02:08 | 2021-01-26T17:02:08 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1723",
"html_url": "https://github.com/huggingface/datasets/pull/1723",
"diff_url": "https://github.com/huggingface/datasets/pull/1723.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1723.patch",
"merged_at": "2021-01-26T17:02... | # What does this PR do?
This PR adds the functionality to load and save `datasets` to and from S3.
You can save `datasets` with either `Dataset.save_to_disk()` or `DatasetDict.save_to_disk()`.
You can load `datasets` with `load_from_disk`, `Dataset.load_from_disk()`, or `DatasetDict.load_from_disk()`.
Lo... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1723/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 3,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1723/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1724/comments | https://api.github.com/repos/huggingface/datasets/issues/1724/events | https://github.com/huggingface/datasets/issues/1724 | 784,023,338 | MDU6SXNzdWU3ODQwMjMzMzg= | 1,724 | could not run models on a offline server successfully | {
"login": "lkcao",
"id": 49967236,
"node_id": "MDQ6VXNlcjQ5OTY3MjM2",
"avatar_url": "https://avatars.githubusercontent.com/u/49967236?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lkcao",
"html_url": "https://github.com/lkcao",
"followers_url": "https://api.github.com/users/lkcao/follow... | [] | open | false | null | [] | null | 6 | 2021-01-12T06:08:06 | 2021-03-03T15:32:29 | null | NONE | null | null | null | Hi, I really need your help about this.
I am trying to fine-tune a RoBERTa model on a remote server that strictly bans internet access. I tried to install all the packages by hand and to run run_mlm.py on the server. It works well on Colab, but when I try to run it on this offline server, it shows:
, the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, or... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1720/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1720/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1719/comments | https://api.github.com/repos/huggingface/datasets/issues/1719/events | https://github.com/huggingface/datasets/pull/1719 | 783,557,542 | MDExOlB1bGxSZXF1ZXN0NTUyODk3MzY4 | 1,719 | Fix column list comparison in transmit format | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 0 | 2021-01-11T17:23:56 | 2021-01-11T18:45:03 | 2021-01-11T18:45:02 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1719",
"html_url": "https://github.com/huggingface/datasets/pull/1719",
"diff_url": "https://github.com/huggingface/datasets/pull/1719.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1719.patch",
"merged_at": "2021-01-11T18:45... | As noticed in #1718 the cache might not reload the cache files when new columns were added.
This is because of an issue in `transmit_format` where the column list comparison fails because the order was not deterministic. This causes the `transmit_format` to apply an unnecessary `set_format` transform with shuffled col... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1719/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1719/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1718/comments | https://api.github.com/repos/huggingface/datasets/issues/1718/events | https://github.com/huggingface/datasets/issues/1718 | 783,474,753 | MDU6SXNzdWU3ODM0NzQ3NTM= | 1,718 | Possible cache miss in datasets | {
"login": "ofirzaf",
"id": 18296312,
"node_id": "MDQ6VXNlcjE4Mjk2MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/18296312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ofirzaf",
"html_url": "https://github.com/ofirzaf",
"followers_url": "https://api.github.com/users/ofirza... | [] | closed | false | null | [] | null | 14 | 2021-01-11T15:37:31 | 2021-04-28T06:35:23 | 2021-01-26T02:47:59 | NONE | null | null | null | Hi,
I am using the datasets package, and even though I run the same data processing functions, datasets always recomputes them instead of using the cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function always recomputes instead of loading fr... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1718/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1718/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1717/comments | https://api.github.com/repos/huggingface/datasets/issues/1717/events | https://github.com/huggingface/datasets/issues/1717 | 783,074,255 | MDU6SXNzdWU3ODMwNzQyNTU= | 1,717 | SciFact dataset - minor changes | {
"login": "dwadden",
"id": 3091916,
"node_id": "MDQ6VXNlcjMwOTE5MTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3091916?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwadden",
"html_url": "https://github.com/dwadden",
"followers_url": "https://api.github.com/users/dwadden/... | [] | closed | false | null | [] | null | 4 | 2021-01-11T05:26:40 | 2021-01-26T02:52:17 | 2021-01-26T02:52:17 | CONTRIBUTOR | null | null | null | Hi,
SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated!
I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?
It also looks like the dataset is being downloa... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1717/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1717/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1716/comments | https://api.github.com/repos/huggingface/datasets/issues/1716/events | https://github.com/huggingface/datasets/pull/1716 | 782,819,006 | MDExOlB1bGxSZXF1ZXN0NTUyMjgzNzE5 | 1,716 | Add Hatexplain Dataset | {
"login": "kushal2000",
"id": 48222101,
"node_id": "MDQ6VXNlcjQ4MjIyMTAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48222101?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kushal2000",
"html_url": "https://github.com/kushal2000",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 0 | 2021-01-10T13:30:01 | 2021-01-18T14:21:42 | 2021-01-18T14:21:42 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1716",
"html_url": "https://github.com/huggingface/datasets/pull/1716",
"diff_url": "https://github.com/huggingface/datasets/pull/1716.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1716.patch",
"merged_at": "2021-01-18T14:21... | Adding Hatexplain - the first benchmark hate speech dataset covering multiple aspects of the issue | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1716/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1716/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1715/comments | https://api.github.com/repos/huggingface/datasets/issues/1715/events | https://github.com/huggingface/datasets/pull/1715 | 782,754,441 | MDExOlB1bGxSZXF1ZXN0NTUyMjM2NDA5 | 1,715 | add Korean intonation-aided intention identification dataset | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/ste... | [] | closed | false | null | [] | null | 0 | 2021-01-10T06:29:04 | 2021-09-17T16:54:13 | 2021-01-12T17:14:33 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1715",
"html_url": "https://github.com/huggingface/datasets/pull/1715",
"diff_url": "https://github.com/huggingface/datasets/pull/1715.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1715.patch",
"merged_at": "2021-01-12T17:14... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1715/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1715/timeline | null | |
https://api.github.com/repos/huggingface/datasets/issues/1714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1714/comments | https://api.github.com/repos/huggingface/datasets/issues/1714/events | https://github.com/huggingface/datasets/pull/1714 | 782,416,276 | MDExOlB1bGxSZXF1ZXN0NTUxOTc3MDA0 | 1,714 | Adding adversarialQA dataset | {
"login": "maxbartolo",
"id": 15869827,
"node_id": "MDQ6VXNlcjE1ODY5ODI3",
"avatar_url": "https://avatars.githubusercontent.com/u/15869827?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxbartolo",
"html_url": "https://github.com/maxbartolo",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 5 | 2021-01-08T21:46:09 | 2021-01-13T16:05:24 | 2021-01-13T16:05:24 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1714",
"html_url": "https://github.com/huggingface/datasets/pull/1714",
"diff_url": "https://github.com/huggingface/datasets/pull/1714.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1714.patch",
"merged_at": "2021-01-13T16:05... | Adding the adversarialQA dataset (https://adversarialqa.github.io/) from Beat the AI (https://arxiv.org/abs/2002.00293) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1714/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1714/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1713/comments | https://api.github.com/repos/huggingface/datasets/issues/1713/events | https://github.com/huggingface/datasets/issues/1713 | 782,337,723 | MDU6SXNzdWU3ODIzMzc3MjM= | 1,713 | Installation using conda | {
"login": "pranav-s",
"id": 9393002,
"node_id": "MDQ6VXNlcjkzOTMwMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9393002?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranav-s",
"html_url": "https://github.com/pranav-s",
"followers_url": "https://api.github.com/users/prana... | [] | closed | false | null | [] | null | 5 | 2021-01-08T19:12:15 | 2021-09-17T12:47:40 | 2021-09-17T12:47:40 | NONE | null | null | null | Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only be installed using pip and... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1713/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1713/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1712 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1712/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1712/comments | https://api.github.com/repos/huggingface/datasets/issues/1712/events | https://github.com/huggingface/datasets/pull/1712 | 782,313,097 | MDExOlB1bGxSZXF1ZXN0NTUxODkxMDk4 | 1,712 | Silicone | {
"login": "eusip",
"id": 1551356,
"node_id": "MDQ6VXNlcjE1NTEzNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eusip",
"html_url": "https://github.com/eusip",
"followers_url": "https://api.github.com/users/eusip/follower... | [] | closed | false | null | [] | null | 6 | 2021-01-08T18:24:18 | 2021-01-21T14:12:37 | 2021-01-21T10:31:11 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1712",
"html_url": "https://github.com/huggingface/datasets/pull/1712",
"diff_url": "https://github.com/huggingface/datasets/pull/1712.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1712.patch",
"merged_at": null
} | My collaborators and I within the Affective Computing team at Telecom Paris would like to push our spoken dialogue dataset for publication. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1712/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/1712/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1711 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1711/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1711/comments | https://api.github.com/repos/huggingface/datasets/issues/1711/events | https://github.com/huggingface/datasets/pull/1711 | 782,129,083 | MDExOlB1bGxSZXF1ZXN0NTUxNzQxODA2 | 1,711 | Fix windows path scheme in cached path | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 0 | 2021-01-08T13:45:56 | 2021-01-11T09:23:20 | 2021-01-11T09:23:19 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1711",
"html_url": "https://github.com/huggingface/datasets/pull/1711",
"diff_url": "https://github.com/huggingface/datasets/pull/1711.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1711.patch",
"merged_at": "2021-01-11T09:23... | As noticed in #807 there's currently an issue with `cached_path` not raising `FileNotFoundError` on windows for absolute paths. This is due to the way we check for a path to be local or not. The check on the scheme using urlparse was incomplete.
I fixed this and added tests. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1711/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1711/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1710/comments | https://api.github.com/repos/huggingface/datasets/issues/1710/events | https://github.com/huggingface/datasets/issues/1710 | 781,914,951 | MDU6SXNzdWU3ODE5MTQ5NTE= | 1,710 | IsADirectoryError when trying to download C4 | {
"login": "fredriko",
"id": 5771366,
"node_id": "MDQ6VXNlcjU3NzEzNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5771366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fredriko",
"html_url": "https://github.com/fredriko",
"followers_url": "https://api.github.com/users/fredr... | [] | open | false | null | [] | null | 1 | 2021-01-08T07:31:30 | 2021-01-13T09:44:13 | null | NONE | null | null | null | **TLDR**:
I fail to download C4 and see a stacktrace originating in `IsADirectoryError` as an explanation for failure.
How can the problem be fixed?
**VERBOSE**:
I use Python version 3.7 and have the following dependencies listed in my project:
```
datasets==1.2.0
apache-beam==2.26.0
```
When runn... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1710/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1710/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1709 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1709/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1709/comments | https://api.github.com/repos/huggingface/datasets/issues/1709/events | https://github.com/huggingface/datasets/issues/1709 | 781,875,640 | MDU6SXNzdWU3ODE4NzU2NDA= | 1,709 | Databases | {
"login": "JimmyJim1",
"id": 68724553,
"node_id": "MDQ6VXNlcjY4NzI0NTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JimmyJim1",
"html_url": "https://github.com/JimmyJim1",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 0 | 2021-01-08T06:14:03 | 2021-01-08T09:00:08 | 2021-01-08T09:00:08 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1709/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1709/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1708 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1708/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1708/comments | https://api.github.com/repos/huggingface/datasets/issues/1708/events | https://github.com/huggingface/datasets/issues/1708 | 781,631,455 | MDU6SXNzdWU3ODE2MzE0NTU= | 1,708 | <html dir="ltr" lang="en" class="focus-outline-visible"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> | {
"login": "Louiejay54",
"id": 77126849,
"node_id": "MDQ6VXNlcjc3MTI2ODQ5",
"avatar_url": "https://avatars.githubusercontent.com/u/77126849?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Louiejay54",
"html_url": "https://github.com/Louiejay54",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 0 | 2021-01-07T21:45:24 | 2021-01-08T09:00:01 | 2021-01-08T09:00:01 | NONE | null | null | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1708/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1708/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1707 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1707/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1707/comments | https://api.github.com/repos/huggingface/datasets/issues/1707/events | https://github.com/huggingface/datasets/pull/1707 | 781,507,545 | MDExOlB1bGxSZXF1ZXN0NTUxMjE5MDk2 | 1,707 | Added generated READMEs for datasets that were missing one. | {
"login": "madlag",
"id": 272253,
"node_id": "MDQ6VXNlcjI3MjI1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madlag",
"html_url": "https://github.com/madlag",
"followers_url": "https://api.github.com/users/madlag/follow... | [] | closed | false | null | [] | null | 1 | 2021-01-07T18:10:06 | 2021-01-18T14:32:33 | 2021-01-18T14:32:33 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1707",
"html_url": "https://github.com/huggingface/datasets/pull/1707",
"diff_url": "https://github.com/huggingface/datasets/pull/1707.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1707.patch",
    "merged_at": "2021-01-18T14:32... | This is it: we worked on a generator with Yacine @yjernite, and we generated dataset cards for all missing ones (161), with all the information we could gather from the datasets repository, and using dummy_data to generate examples when possible.
Code is available here for the moment: https://github.com/madlag/datasets... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1707/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1707/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1706/comments | https://api.github.com/repos/huggingface/datasets/issues/1706/events | https://github.com/huggingface/datasets/issues/1706 | 781,494,476 | MDU6SXNzdWU3ODE0OTQ0NzY= | 1,706 | Error when downloading a large dataset on slow connection. | {
"login": "lucadiliello",
"id": 23355969,
"node_id": "MDQ6VXNlcjIzMzU1OTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucadiliello",
"html_url": "https://github.com/lucadiliello",
"followers_url": "https://api.github.c... | [] | open | false | null | [] | null | 1 | 2021-01-07T17:48:15 | 2021-01-13T10:35:02 | null | CONTRIBUTOR | null | null | null | I receive the following error after about an hour trying to download the `openwebtext` dataset.
The code used is:
```python
import datasets
datasets.load_dataset("openwebtext")
```
> Traceback (most recent call last): ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1706/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1706/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1705/comments | https://api.github.com/repos/huggingface/datasets/issues/1705/events | https://github.com/huggingface/datasets/pull/1705 | 781,474,949 | MDExOlB1bGxSZXF1ZXN0NTUxMTkyMTc4 | 1,705 | Add information about caching and verifications in "Load a Dataset" docs | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/... | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] | closed | false | null | [] | null | 0 | 2021-01-07T17:18:44 | 2021-01-12T14:08:01 | 2021-01-12T14:08:01 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1705",
"html_url": "https://github.com/huggingface/datasets/pull/1705",
"diff_url": "https://github.com/huggingface/datasets/pull/1705.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1705.patch",
"merged_at": "2021-01-12T14:08... | Related to #215.
Missing improvements from @lhoestq's #1703. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1705/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1705/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1704 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1704/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1704/comments | https://api.github.com/repos/huggingface/datasets/issues/1704/events | https://github.com/huggingface/datasets/pull/1704 | 781,402,757 | MDExOlB1bGxSZXF1ZXN0NTUxMTMyNDI1 | 1,704 | Update XSUM Factuality DatasetCard | {
"login": "vineeths96",
"id": 50873201,
"node_id": "MDQ6VXNlcjUwODczMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vineeths96",
"html_url": "https://github.com/vineeths96",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 0 | 2021-01-07T15:37:14 | 2021-01-12T13:30:04 | 2021-01-12T13:30:04 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1704",
"html_url": "https://github.com/huggingface/datasets/pull/1704",
"diff_url": "https://github.com/huggingface/datasets/pull/1704.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1704.patch",
"merged_at": "2021-01-12T13:30... | Update XSUM Factuality DatasetCard | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1704/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1704/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1703/comments | https://api.github.com/repos/huggingface/datasets/issues/1703/events | https://github.com/huggingface/datasets/pull/1703 | 781,395,146 | MDExOlB1bGxSZXF1ZXN0NTUxMTI2MjA5 | 1,703 | Improvements regarding caching and fingerprinting | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 8 | 2021-01-07T15:26:29 | 2021-01-19T17:32:11 | 2021-01-19T17:32:10 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1703",
"html_url": "https://github.com/huggingface/datasets/pull/1703",
"diff_url": "https://github.com/huggingface/datasets/pull/1703.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1703.patch",
"merged_at": "2021-01-19T17:32... | This PR adds these features:
- Enable/disable caching
If disabled, the library will no longer reload cached dataset files when applying transforms to the datasets.
It is equivalent to setting `load_from_cache` to `False` in dataset transforms.
```python
from datasets import set_caching_enabled
set_cach... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1703/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1703/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1702 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1702/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1702/comments | https://api.github.com/repos/huggingface/datasets/issues/1702/events | https://github.com/huggingface/datasets/pull/1702 | 781,383,277 | MDExOlB1bGxSZXF1ZXN0NTUxMTE2NDc0 | 1,702 | Fix importlib metdata import in py38 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 0 | 2021-01-07T15:10:30 | 2021-01-08T10:47:15 | 2021-01-08T10:47:15 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1702",
"html_url": "https://github.com/huggingface/datasets/pull/1702",
"diff_url": "https://github.com/huggingface/datasets/pull/1702.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1702.patch",
"merged_at": "2021-01-08T10:47... | In Python 3.8 there's no need to install `importlib_metadata` since it already exists as `importlib.metadata` in the standard lib. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1702/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1702/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1701 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1701/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1701/comments | https://api.github.com/repos/huggingface/datasets/issues/1701/events | https://github.com/huggingface/datasets/issues/1701 | 781,345,717 | MDU6SXNzdWU3ODEzNDU3MTc= | 1,701 | Some datasets miss dataset_infos.json or dummy_data.zip | {
"login": "madlag",
"id": 272253,
"node_id": "MDQ6VXNlcjI3MjI1Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/272253?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/madlag",
"html_url": "https://github.com/madlag",
"followers_url": "https://api.github.com/users/madlag/follow... | [] | open | false | null | [] | null | 1 | 2021-01-07T14:17:13 | 2021-01-12T13:40:46 | null | CONTRIBUTOR | null | null | null | While working on dataset REAME generation script at https://github.com/madlag/datasets_readme_generator , I noticed that some datasets miss a dataset_infos.json :
```
c4
lm1b
reclor
wikihow
```
And some do not have a dummy_data.zip:
```
kor_nli
math_dataset
mlqa
ms_marco
newsgroup
qa4mre
qanga... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1701/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1701/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1700/comments | https://api.github.com/repos/huggingface/datasets/issues/1700/events | https://github.com/huggingface/datasets/pull/1700 | 781,333,589 | MDExOlB1bGxSZXF1ZXN0NTUxMDc1NTg2 | 1,700 | Update Curiosity dialogs DatasetCard | {
"login": "vineeths96",
"id": 50873201,
"node_id": "MDQ6VXNlcjUwODczMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vineeths96",
"html_url": "https://github.com/vineeths96",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 0 | 2021-01-07T13:59:27 | 2021-01-12T18:51:32 | 2021-01-12T18:51:32 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1700",
"html_url": "https://github.com/huggingface/datasets/pull/1700",
"diff_url": "https://github.com/huggingface/datasets/pull/1700.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1700.patch",
"merged_at": "2021-01-12T18:51... | Update Curiosity dialogs DatasetCard
Some entries in the data fields section are yet to be filled, as there is little information available regarding those fields.
"url": "https://api.github.com/repos/huggingface/datasets/issues/1700/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1700/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1699/comments | https://api.github.com/repos/huggingface/datasets/issues/1699/events | https://github.com/huggingface/datasets/pull/1699 | 781,271,558 | MDExOlB1bGxSZXF1ZXN0NTUxMDIzODE5 | 1,699 | Update DBRD dataset card and download URL | {
"login": "benjaminvdb",
"id": 8875786,
"node_id": "MDQ6VXNlcjg4NzU3ODY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8875786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benjaminvdb",
"html_url": "https://github.com/benjaminvdb",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | 1 | 2021-01-07T12:16:43 | 2021-01-07T13:41:39 | 2021-01-07T13:40:59 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1699",
"html_url": "https://github.com/huggingface/datasets/pull/1699",
"diff_url": "https://github.com/huggingface/datasets/pull/1699.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1699.patch",
"merged_at": "2021-01-07T13:40... | I've added the Dutch Bood Review Dataset (DBRD) during the recent sprint. This pull request makes two minor changes:
1. I'm changing the download URL from Google Drive to the dataset's GitHub release package. This is now possible because of PR #1316.
2. I've updated the dataset card.
Cheers! 😄 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1699/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1699/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1698/comments | https://api.github.com/repos/huggingface/datasets/issues/1698/events | https://github.com/huggingface/datasets/pull/1698 | 781,152,561 | MDExOlB1bGxSZXF1ZXN0NTUwOTI0ODQ3 | 1,698 | Update Coached Conv Pref DatasetCard | {
"login": "vineeths96",
"id": 50873201,
"node_id": "MDQ6VXNlcjUwODczMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vineeths96",
"html_url": "https://github.com/vineeths96",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2021-01-07T09:07:16 | 2021-01-08T17:04:33 | 2021-01-08T17:04:32 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1698",
"html_url": "https://github.com/huggingface/datasets/pull/1698",
"diff_url": "https://github.com/huggingface/datasets/pull/1698.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1698.patch",
"merged_at": "2021-01-08T17:04... | Update Coached Conversation Preferance DatasetCard | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1698/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1698/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1697/comments | https://api.github.com/repos/huggingface/datasets/issues/1697/events | https://github.com/huggingface/datasets/pull/1697 | 781,126,579 | MDExOlB1bGxSZXF1ZXN0NTUwOTAzNzI5 | 1,697 | Update DialogRE DatasetCard | {
"login": "vineeths96",
"id": 50873201,
"node_id": "MDQ6VXNlcjUwODczMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/50873201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vineeths96",
"html_url": "https://github.com/vineeths96",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2021-01-07T08:22:33 | 2021-01-07T13:34:28 | 2021-01-07T13:34:28 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1697",
"html_url": "https://github.com/huggingface/datasets/pull/1697",
"diff_url": "https://github.com/huggingface/datasets/pull/1697.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1697.patch",
"merged_at": "2021-01-07T13:34... | Update the information in the dataset card for the Dialog RE dataset. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1697/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1697/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1696/comments | https://api.github.com/repos/huggingface/datasets/issues/1696/events | https://github.com/huggingface/datasets/issues/1696 | 781,096,918 | MDU6SXNzdWU3ODEwOTY5MTg= | 1,696 | Unable to install datasets | {
"login": "glee2429",
"id": 12635475,
"node_id": "MDQ6VXNlcjEyNjM1NDc1",
"avatar_url": "https://avatars.githubusercontent.com/u/12635475?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/glee2429",
"html_url": "https://github.com/glee2429",
"followers_url": "https://api.github.com/users/gle... | [] | closed | false | null | [] | null | 4 | 2021-01-07T07:24:37 | 2021-01-08T00:33:05 | 2021-01-07T22:06:05 | NONE | null | null | null | ** Edit **
I believe there's a bug with the package when you're installing it with Python 3.9. I recommend sticking with previous versions. Thanks, @thomwolf for the insight!
**Short description**
I followed the instructions for installing datasets (https://huggingface.co/docs/datasets/installation.html). Howev... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1696/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1696/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1695/comments | https://api.github.com/repos/huggingface/datasets/issues/1695/events | https://github.com/huggingface/datasets/pull/1695 | 780,971,987 | MDExOlB1bGxSZXF1ZXN0NTUwNzc1OTU4 | 1,695 | fix ner_tag bugs in thainer | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2021-01-07T02:12:33 | 2021-01-07T14:43:45 | 2021-01-07T14:43:28 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1695",
"html_url": "https://github.com/huggingface/datasets/pull/1695",
"diff_url": "https://github.com/huggingface/datasets/pull/1695.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1695.patch",
"merged_at": "2021-01-07T14:43... | fix bug that results in `ner_tag` always equal to 'O'. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1695/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1695/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1694/comments | https://api.github.com/repos/huggingface/datasets/issues/1694/events | https://github.com/huggingface/datasets/pull/1694 | 780,429,080 | MDExOlB1bGxSZXF1ZXN0NTUwMzI0Mjcx | 1,694 | Add OSCAR | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 10 | 2021-01-06T10:21:08 | 2021-01-25T09:10:33 | 2021-01-25T09:10:32 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1694",
"html_url": "https://github.com/huggingface/datasets/pull/1694",
"diff_url": "https://github.com/huggingface/datasets/pull/1694.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1694.patch",
"merged_at": "2021-01-25T09:10... | Continuation of #348
The files have been moved to S3 and only the unshuffled version is available.
Both original and deduplicated versions of each language are available.
Example of usage:
```python
from datasets import load_dataset
oscar_dedup_en = load_dataset("oscar", "unshuffled_deduplicated_en", split="... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1694/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 2,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1694/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1693/comments | https://api.github.com/repos/huggingface/datasets/issues/1693/events | https://github.com/huggingface/datasets/pull/1693 | 780,268,595 | MDExOlB1bGxSZXF1ZXN0NTUwMTc3MDEx | 1,693 | Fix reuters metadata parsing errors | {
"login": "jbragg",
"id": 2238344,
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbragg",
"html_url": "https://github.com/jbragg",
"followers_url": "https://api.github.com/users/jbragg/foll... | [] | closed | false | null | [] | null | 0 | 2021-01-06T08:26:03 | 2021-01-07T23:53:47 | 2021-01-07T14:01:22 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1693",
"html_url": "https://github.com/huggingface/datasets/pull/1693",
"diff_url": "https://github.com/huggingface/datasets/pull/1693.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1693.patch",
"merged_at": "2021-01-07T14:01... | Was missing the last entry in each metadata category | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1693/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1693/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1691/comments | https://api.github.com/repos/huggingface/datasets/issues/1691/events | https://github.com/huggingface/datasets/pull/1691 | 779,882,271 | MDExOlB1bGxSZXF1ZXN0NTQ5ODE3NTM0 | 1,691 | Updated HuggingFace Datasets README (fix typos) | {
"login": "8bitmp3",
"id": 19637339,
"node_id": "MDQ6VXNlcjE5NjM3MzM5",
"avatar_url": "https://avatars.githubusercontent.com/u/19637339?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/8bitmp3",
"html_url": "https://github.com/8bitmp3",
"followers_url": "https://api.github.com/users/8bitmp... | [] | closed | false | null | [] | null | 0 | 2021-01-06T02:14:38 | 2021-01-16T23:30:47 | 2021-01-07T10:06:32 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1691",
"html_url": "https://github.com/huggingface/datasets/pull/1691",
"diff_url": "https://github.com/huggingface/datasets/pull/1691.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1691.patch",
"merged_at": "2021-01-07T10:06... | Awesome work on 🤗 Datasets. I found a couple of small typos in the README. Hope this helps.

| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1691/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1691/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1690/comments | https://api.github.com/repos/huggingface/datasets/issues/1690/events | https://github.com/huggingface/datasets/pull/1690 | 779,441,631 | MDExOlB1bGxSZXF1ZXN0NTQ5NDEwOTgw | 1,690 | Fast start up | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 0 | 2021-01-05T19:07:53 | 2021-01-06T14:20:59 | 2021-01-06T14:20:58 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1690",
"html_url": "https://github.com/huggingface/datasets/pull/1690",
"diff_url": "https://github.com/huggingface/datasets/pull/1690.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1690.patch",
"merged_at": "2021-01-06T14:20... | Currently if optional dependencies such as tensorflow, torch, apache_beam, faiss and elasticsearch are installed, then it takes a long time to do `import datasets` since it imports all of these heavy dependencies.
To make a fast start up for `datasets` I changed that so that they are not imported when `datasets` is ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1690/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 3,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1690/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1689/comments | https://api.github.com/repos/huggingface/datasets/issues/1689/events | https://github.com/huggingface/datasets/pull/1689 | 779,107,313 | MDExOlB1bGxSZXF1ZXN0NTQ5MTEwMDgw | 1,689 | Fix ade_corpus_v2 config names | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 0 | 2021-01-05T14:33:28 | 2021-01-05T14:55:09 | 2021-01-05T14:55:08 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1689",
"html_url": "https://github.com/huggingface/datasets/pull/1689",
"diff_url": "https://github.com/huggingface/datasets/pull/1689.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1689.patch",
"merged_at": "2021-01-05T14:55... | There are currently some typos in the config names of the `ade_corpus_v2` dataset, I fixed them:
- Ade_corpos_v2_classificaion -> Ade_corpus_v2_classification
- Ade_corpos_v2_drug_ade_relation -> Ade_corpus_v2_drug_ade_relation
- Ade_corpos_v2_drug_dosage_relation -> Ade_corpus_v2_drug_dosage_relation | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1689/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1689/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1688/comments | https://api.github.com/repos/huggingface/datasets/issues/1688/events | https://github.com/huggingface/datasets/pull/1688 | 779,029,685 | MDExOlB1bGxSZXF1ZXN0NTQ5MDM5ODg0 | 1,688 | Fix DaNE last example | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 0 | 2021-01-05T13:29:37 | 2021-01-05T14:00:15 | 2021-01-05T14:00:13 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1688",
"html_url": "https://github.com/huggingface/datasets/pull/1688",
"diff_url": "https://github.com/huggingface/datasets/pull/1688.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1688.patch",
"merged_at": "2021-01-05T14:00... | The last example from the DaNE dataset is empty.
Fix #1686 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1688/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1688/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1687 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1687/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1687/comments | https://api.github.com/repos/huggingface/datasets/issues/1687/events | https://github.com/huggingface/datasets/issues/1687 | 779,004,894 | MDU6SXNzdWU3NzkwMDQ4OTQ= | 1,687 | Question: Shouldn't .info be a part of DatasetDict? | {
"login": "KennethEnevoldsen",
"id": 23721977,
"node_id": "MDQ6VXNlcjIzNzIxOTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KennethEnevoldsen",
"html_url": "https://github.com/KennethEnevoldsen",
"followers_url": "https... | [] | open | false | null | [] | null | 2 | 2021-01-05T13:08:41 | 2021-01-07T10:18:06 | null | CONTRIBUTOR | null | null | null | Currently, only `Dataset` contains the .info or .features, but as many datasets contains standard splits (train, test) and thus the underlying information is the same (or at least should be) across the datasets.
For instance:
```
>>> ds = datasets.load_dataset("conll2002", "es")
>>> ds.info
Traceback (most rece... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1687/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1687/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1686/comments | https://api.github.com/repos/huggingface/datasets/issues/1686/events | https://github.com/huggingface/datasets/issues/1686 | 778,921,684 | MDU6SXNzdWU3Nzg5MjE2ODQ= | 1,686 | Dataset Error: DaNE contains empty samples at the end | {
"login": "KennethEnevoldsen",
"id": 23721977,
"node_id": "MDQ6VXNlcjIzNzIxOTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KennethEnevoldsen",
"html_url": "https://github.com/KennethEnevoldsen",
"followers_url": "https... | [] | closed | false | null | [] | null | 3 | 2021-01-05T11:54:26 | 2021-01-05T14:01:09 | 2021-01-05T14:00:13 | CONTRIBUTOR | null | null | null | The dataset DaNE, contains empty samples at the end. It is naturally easy to remove using a filter but should probably not be there, to begin with as it can cause errors.
```python
>>> import datasets
[...]
>>> dataset = datasets.load_dataset("dane")
[...]
>>> dataset["test"][-1]
{'dep_ids': [], 'dep_labels': ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1686/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1686/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1685/comments | https://api.github.com/repos/huggingface/datasets/issues/1685/events | https://github.com/huggingface/datasets/pull/1685 | 778,914,431 | MDExOlB1bGxSZXF1ZXN0NTQ4OTM1MzY2 | 1,685 | Update README.md of covid-tweets-japanese | {
"login": "forest1988",
"id": 2755894,
"node_id": "MDQ6VXNlcjI3NTU4OTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2755894?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/forest1988",
"html_url": "https://github.com/forest1988",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 1 | 2021-01-05T11:47:27 | 2021-01-06T10:27:12 | 2021-01-06T09:31:10 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1685",
"html_url": "https://github.com/huggingface/datasets/pull/1685",
"diff_url": "https://github.com/huggingface/datasets/pull/1685.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1685.patch",
"merged_at": "2021-01-06T09:31... | Update README.md of covid-tweets-japanese added by PR https://github.com/huggingface/datasets/pull/1367 and https://github.com/huggingface/datasets/pull/1402.
- Update "Data Splits" to be more precise that no information is provided for now.
- old: [More Information Needed]
- new: No information about data spl... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1685/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1685/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1684 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1684/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1684/comments | https://api.github.com/repos/huggingface/datasets/issues/1684/events | https://github.com/huggingface/datasets/pull/1684 | 778,356,196 | MDExOlB1bGxSZXF1ZXN0NTQ4NDU3NDY1 | 1,684 | Add CANER Corpus | {
"login": "KMFODA",
"id": 35491698,
"node_id": "MDQ6VXNlcjM1NDkxNjk4",
"avatar_url": "https://avatars.githubusercontent.com/u/35491698?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KMFODA",
"html_url": "https://github.com/KMFODA",
"followers_url": "https://api.github.com/users/KMFODA/fo... | [] | closed | false | null | [] | null | 0 | 2021-01-04T20:49:11 | 2021-01-25T09:09:20 | 2021-01-25T09:09:20 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1684",
"html_url": "https://github.com/huggingface/datasets/pull/1684",
"diff_url": "https://github.com/huggingface/datasets/pull/1684.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1684.patch",
"merged_at": "2021-01-25T09:09... | What does this PR do?
Adds the following dataset:
https://github.com/RamziSalah/Classical-Arabic-Named-Entity-Recognition-Corpus
Who can review?
@lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1684/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1684/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1683/comments | https://api.github.com/repos/huggingface/datasets/issues/1683/events | https://github.com/huggingface/datasets/issues/1683 | 778,287,612 | MDU6SXNzdWU3NzgyODc2MTI= | 1,683 | `ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext | {
"login": "abarbosa94",
"id": 6608232,
"node_id": "MDQ6VXNlcjY2MDgyMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abarbosa94",
"html_url": "https://github.com/abarbosa94",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 2 | 2021-01-04T18:47:53 | 2021-01-04T19:04:45 | 2021-01-04T19:04:45 | CONTRIBUTOR | null | null | null | It seems to fail the final batch ):
steps to reproduce:
```
from datasets import load_dataset
from elasticsearch import Elasticsearch
import torch
from transformers import file_utils, set_seed
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast
MAX_SEQ_LENGTH = 256
ctx_encoder = DPRCon... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1683/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1683/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1682/comments | https://api.github.com/repos/huggingface/datasets/issues/1682/events | https://github.com/huggingface/datasets/pull/1682 | 778,268,156 | MDExOlB1bGxSZXF1ZXN0NTQ4Mzg1NTk1 | 1,682 | Don't use xlrd for xlsx files | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 0 | 2021-01-04T18:11:50 | 2021-01-04T18:13:14 | 2021-01-04T18:13:13 | MEMBER | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1682",
"html_url": "https://github.com/huggingface/datasets/pull/1682",
"diff_url": "https://github.com/huggingface/datasets/pull/1682.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1682.patch",
"merged_at": "2021-01-04T18:13... | Since the latest release of `xlrd` (2.0), the support for xlsx files stopped.
Therefore we needed to use something else.
A good alternative is `openpyxl`, which also has an integration with pandas, so we can still call `pd.read_excel`.
I left the unused import of `openpyxl` in the dataset scripts to show users that ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1682/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1682/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1681 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1681/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1681/comments | https://api.github.com/repos/huggingface/datasets/issues/1681/events | https://github.com/huggingface/datasets/issues/1681 | 777,644,163 | MDU6SXNzdWU3Nzc2NDQxNjM= | 1,681 | Dataset "dane" missing | {
"login": "KennethEnevoldsen",
"id": 23721977,
"node_id": "MDQ6VXNlcjIzNzIxOTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/23721977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KennethEnevoldsen",
"html_url": "https://github.com/KennethEnevoldsen",
"followers_url": "https... | [] | closed | false | null | [] | null | 3 | 2021-01-03T14:03:03 | 2021-01-05T08:35:35 | 2021-01-05T08:35:13 | CONTRIBUTOR | null | null | null | the `dane` dataset appear to be missing in the latest version (1.1.3).
```python
>>> import datasets
>>> datasets.__version__
'1.1.3'
>>> "dane" in datasets.list_datasets()
True
```
As we can see it should be present, but doesn't seem to be findable when using `load_dataset`.
```python
>>> datasets.load... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1681/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1681/timeline | null |
https://api.github.com/repos/huggingface/datasets/issues/1680 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1680/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1680/comments | https://api.github.com/repos/huggingface/datasets/issues/1680/events | https://github.com/huggingface/datasets/pull/1680 | 777,623,053 | MDExOlB1bGxSZXF1ZXN0NTQ3ODY4MjEw | 1,680 | added TurkishProductReviews dataset | {
"login": "basakbuluz",
"id": 41359672,
"node_id": "MDQ6VXNlcjQxMzU5Njcy",
"avatar_url": "https://avatars.githubusercontent.com/u/41359672?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/basakbuluz",
"html_url": "https://github.com/basakbuluz",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 2 | 2021-01-03T11:52:59 | 2021-01-04T18:15:35 | 2021-01-04T18:15:35 | CONTRIBUTOR | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1680",
"html_url": "https://github.com/huggingface/datasets/pull/1680",
"diff_url": "https://github.com/huggingface/datasets/pull/1680.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1680.patch",
"merged_at": "2021-01-04T18:15... | This PR added **Turkish Product Reviews Dataset contains 235.165 product reviews collected online. There are 220.284 positive, 14881 negative reviews**.
- **Repository:** [turkish-text-data](https://github.com/fthbrmnby/turkish-text-data)
- **Point of Contact:** Fatih Barmanbay - @fthbrmnby | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/1680/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/1680/timeline | null |