Dataset schema (14 columns; ranges and values observed across the dataset):

  id               int64         599M – 3.29B
  url              string        length 58–61
  html_url         string        length 46–51
  number           int64         1 – 7.72k
  title            string        length 1–290
  state            string        2 values (open / closed)
  comments         int64         0 – 70
  created_at       timestamp[s]  2020-04-14 10:18:02 – 2025-08-05 09:28:51
  updated_at       timestamp[s]  2020-04-27 16:04:17 – 2025-08-05 11:39:56
  closed_at        timestamp[s]  2020-04-14 12:01:40 – 2025-08-01 05:15:45
  user_login       string        length 3–26
  labels           list          length 0–4
  body             string        length 0–228k
  is_pull_request  bool          2 values (true / false)
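The flattened records below all follow this schema. As a minimal, stdlib-only sketch of how such records can be represented and queried in Python (the two sample rows are transcribed from the records below, using a subset of the 14 columns for brevity; this is illustrative, not the dataset's own API):

```python
from datetime import datetime

# Two sample rows transcribed from the records below; field names follow the
# schema above (a subset of the 14 columns, for brevity).
rows = [
    {"number": 1764, "title": "Connection Issues", "state": "closed",
     "comments": 1, "created_at": "2021-01-21T20:56:09",
     "user_login": "SaeedNajafi", "labels": [], "is_pull_request": False},
    {"number": 1763, "title": "PAWS-X: Fix csv Dictreader splitting data on quotes",
     "state": "closed", "comments": 0, "created_at": "2021-01-21T18:21:01",
     "user_login": "gowtham1997", "labels": [], "is_pull_request": True},
]

# GitHub's issues endpoint returns both issues and pull requests;
# the is_pull_request column separates the two.
issues = [r for r in rows if not r["is_pull_request"]]
pulls = [r for r in rows if r["is_pull_request"]]

# created_at is a second-resolution timestamp; parse it for date arithmetic.
earliest = min(datetime.fromisoformat(r["created_at"]) for r in rows)

print(len(issues), len(pulls), earliest.isoformat())  # 1 1 2021-01-21T18:21:01
```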
#1764 Connection Issues — issue, closed, 1 comment
by SaeedNajafi · id 791,486,860 · created 2021-01-21T20:56:09 · updated 2021-01-21T21:00:19 · closed 2021-01-21T21:00:02
https://github.com/huggingface/datasets/issues/1764 (API: https://api.github.com/repos/huggingface/datasets/issues/1764)
body: Today, I am getting connection issues while loading a dataset and the metric. ``` Traceback (most recent call last): File "src/train.py", line 180, in <module> train_dataset, dev_dataset, test_dataset = create_race_dataset() File "src/train.py", line 130, in create_race_dataset train_dataset = load_da...

#1763 PAWS-X: Fix csv Dictreader splitting data on quotes — pull request, closed, 0 comments
by gowtham1997 · id 791,389,763 · created 2021-01-21T18:21:01 · updated 2021-01-22T10:14:33 · closed 2021-01-22T10:13:45
https://github.com/huggingface/datasets/pull/1763 (API: https://api.github.com/repos/huggingface/datasets/issues/1763)
body: ```python from datasets import load_dataset # load english paws-x dataset datasets = load_dataset('paws-x', 'en') print(len(datasets['train'])) # outputs 49202 but official dataset has 49401 pairs print(datasets['train'].unique('label')) # outputs [1, 0, -1] but labels are binary [0,1] ...

#1762 Unable to format dataset to CUDA Tensors — issue, closed, 6 comments
by gchhablani · id 791,226,007 · created 2021-01-21T15:31:23 · updated 2021-02-02T07:13:22 · closed 2021-02-02T07:13:22
https://github.com/huggingface/datasets/issues/1762 (API: https://api.github.com/repos/huggingface/datasets/issues/1762)
body: Hi, I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show show to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors. I tried this, but Dataset doesn't suppor...

#1761 Add SILICONE benchmark — pull request, closed, 8 comments
by eusip · id 791,150,858 · created 2021-01-21T14:29:12 · updated 2021-02-04T14:32:48 · closed 2021-01-26T13:50:31
https://github.com/huggingface/datasets/pull/1761 (API: https://api.github.com/repos/huggingface/datasets/issues/1761)
body: My collaborators and I within the Affective Computing team at Telecom Paris would like to re-submit our spoken dialogue dataset for publication. This is a new pull request relative to the [previously closed request](https://github.com/huggingface/datasets/pull/1712) which was reviewed by @lhoestq.

#1760 More tags — pull request, closed, 2 comments
by lhoestq · id 791,110,857 · created 2021-01-21T13:50:10 · updated 2021-01-22T09:40:01 · closed 2021-01-22T09:40:00
https://github.com/huggingface/datasets/pull/1760 (API: https://api.github.com/repos/huggingface/datasets/issues/1760)
body: Since the hub v2 is going to be released soon I figured it would be great to add the missing tags at least for some of the datasets of reference listed [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#write-the-loadingprocessing-code)
#1759 wikipedia dataset incomplete — issue, closed, 4 comments
by ChrisDelClea · id 790,992,226 · created 2021-01-21T11:47:15 · updated 2021-01-21T17:22:11 · closed 2021-01-21T17:21:06
https://github.com/huggingface/datasets/issues/1759 (API: https://api.github.com/repos/huggingface/datasets/issues/1759)
body: Hey guys, I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset. Unfortunately, I found out that there is an incompleteness for the German dataset. For reasons unknown to me, the number of inhabitants has been removed from many pages: Thorey-sur-Ouche has 128 inhabitants a...

#1758 dataset.search() (elastic) cannot reliably retrieve search results — issue, closed, 2 comments
by afogarty85 · id 790,626,116 · created 2021-01-21T02:26:37 · updated 2021-01-22T00:25:50 · closed 2021-01-22T00:25:50
https://github.com/huggingface/datasets/issues/1758 (API: https://api.github.com/repos/huggingface/datasets/issues/1758)
body: I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices. The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer. I am indexing data t...

#1757 FewRel — issue, closed, 5 comments · labels: dataset request
by dspoka · id 790,466,509 · created 2021-01-20T23:56:03 · updated 2021-03-09T02:52:05 · closed 2021-03-08T14:34:52
https://github.com/huggingface/datasets/issues/1757 (API: https://api.github.com/repos/huggingface/datasets/issues/1757)
body: ## Adding a Dataset - **Name:** FewRel - **Description:** Large-Scale Supervised Few-Shot Relation Classification Dataset - **Paper:** @inproceedings{han2018fewrel, title={FewRel:A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation}, auth...

#1756 Ccaligned multilingual translation dataset — issue, closed, 0 comments · labels: dataset request
by flozi00 · id 790,380,028 · created 2021-01-20T22:18:44 · updated 2021-03-01T10:36:21 · closed 2021-03-01T10:36:21
https://github.com/huggingface/datasets/issues/1756 (API: https://api.github.com/repos/huggingface/datasets/issues/1756)
body: ## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language ...

#1755 Using select/reordering datasets slows operations down immensely — issue, closed, 2 comments
by afogarty85 · id 790,324,734 · created 2021-01-20T21:12:12 · updated 2021-01-20T22:03:39 · closed 2021-01-20T22:03:39
https://github.com/huggingface/datasets/issues/1755 (API: https://api.github.com/repos/huggingface/datasets/issues/1755)
body: I am using portions of HF's helpful work in preparing / scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-ordering the dataset, computations slow down immensely where the total scoring process on 131k training examples would take maybe 3 minutes, now take over an hour. The below examp...
#1754 Use a config id in the cache directory names for custom configs — pull request, closed, 0 comments
by lhoestq · id 789,881,730 · created 2021-01-20T11:11:00 · updated 2021-01-25T09:12:07 · closed 2021-01-25T09:12:06
https://github.com/huggingface/datasets/pull/1754 (API: https://api.github.com/repos/huggingface/datasets/issues/1754)
body: As noticed by @JetRunner there was some issues when trying to generate a dataset using a custom config that is based on an existing config. For example in the following code the `mnli_custom` would reuse the cache used to create `mnli` instead of generating a new dataset with the new label classes: ```python from ...

#1753 fix comet citations — pull request, closed, 0 comments
by ricardorei · id 789,867,685 · created 2021-01-20T10:52:38 · updated 2021-01-20T14:39:30 · closed 2021-01-20T14:39:30
https://github.com/huggingface/datasets/pull/1753 (API: https://api.github.com/repos/huggingface/datasets/issues/1753)
body: I realized COMET citations were not showing in the hugging face metrics page: <img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105164848-8b9da900-5b0d-11eb-9e20-a38f559d2037.png"> This pull request is intended to fix that. Thanks!

#1752 COMET metric citation — pull request, closed, 1 comment
by ricardorei · id 789,822,459 · created 2021-01-20T09:54:43 · updated 2021-01-20T10:27:07 · closed 2021-01-20T10:25:02
https://github.com/huggingface/datasets/pull/1752 (API: https://api.github.com/repos/huggingface/datasets/issues/1752)
body: In my last pull request to add COMET metric, the citations where not following the usual "format". Because of that they where not correctly displayed on the website: <img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105158000-686efb80-5b05-11eb-8bb0-9c8...

#1751 Updated README for the Social Bias Frames dataset — pull request, closed, 0 comments
by mcmillanmajora · id 789,232,980 · created 2021-01-19T17:53:00 · updated 2021-01-20T14:56:52 · closed 2021-01-20T14:56:52
https://github.com/huggingface/datasets/pull/1751 (API: https://api.github.com/repos/huggingface/datasets/issues/1751)
body: See the updated card at https://github.com/mcmillanmajora/datasets/tree/add-SBIC-card/datasets/social_bias_frames. I incorporated information from the [SBIC data statement](https://homes.cs.washington.edu/~msap/social-bias-frames/DATASTATEMENT.html), paper, and the corpus README file included with the dataset download.

#1750 Fix typo in README.md of cnn_dailymail — pull request, closed, 2 comments
by forest1988 · id 788,668,085 · created 2021-01-19T03:06:05 · updated 2021-01-19T11:07:29 · closed 2021-01-19T09:48:43
https://github.com/huggingface/datasets/pull/1750 (API: https://api.github.com/repos/huggingface/datasets/issues/1750)
body: When I read the README.md of `CNN/DailyMail Dataset`, there seems to be a typo `CCN`. I am afraid this is a trivial matter, but I would like to make a suggestion for revision.
#1749 Added metadata and correct splits for swda. — pull request, closed, 2 comments
by gmihaila · id 788,476,639 · created 2021-01-18T18:36:32 · updated 2021-01-29T19:35:52 · closed 2021-01-29T18:38:08
https://github.com/huggingface/datasets/pull/1749 (API: https://api.github.com/repos/huggingface/datasets/issues/1749)
body: Switchboard Dialog Act Corpus I made some changes following @bhavitvyamalik recommendation in #1678: * Contains all metadata. * Used official implementation from the [/swda](https://github.com/cgpotts/swda) repo. * Add official train and test splits used in [Stolcke et al. (2000)](https://web.stanford.edu/~jur...

#1748 add Stuctured Argument Extraction for Korean dataset — pull request, closed, 0 comments
by stevhliu · id 788,431,642 · created 2021-01-18T17:14:19 · updated 2021-09-17T16:53:18 · closed 2021-01-19T11:26:58
https://github.com/huggingface/datasets/pull/1748 (API: https://api.github.com/repos/huggingface/datasets/issues/1748)
(no body)

#1747 datasets slicing with seed — issue, closed, 2 comments
by ghost · id 788,299,775 · created 2021-01-18T14:08:55 · updated 2022-10-05T12:37:27 · closed 2022-10-05T12:37:27
https://github.com/huggingface/datasets/issues/1747 (API: https://api.github.com/repos/huggingface/datasets/issues/1747)
body: Hi I need to slice a dataset with random seed, I looked into documentation here https://huggingface.co/docs/datasets/splits.html I could not find a seed option, could you assist me please how I can get a slice for different seeds? thank you. @lhoestq

#1746 Fix release conda worflow — pull request, closed, 0 comments
by lhoestq · id 788,188,184 · created 2021-01-18T11:29:10 · updated 2021-01-18T11:31:24 · closed 2021-01-18T11:31:23
https://github.com/huggingface/datasets/pull/1746 (API: https://api.github.com/repos/huggingface/datasets/issues/1746)
body: The current workflow yaml file is not valid according to https://github.com/huggingface/datasets/actions/runs/487638110

#1745 difference between wsc and wsc.fixed for superglue — issue, closed, 1 comment
by ghost · id 787,838,256 · created 2021-01-18T00:50:19 · updated 2021-01-18T11:02:43 · closed 2021-01-18T00:59:34
https://github.com/huggingface/datasets/issues/1745 (API: https://api.github.com/repos/huggingface/datasets/issues/1745)
body: Hi I see two versions of wsc in superglue, and I am not sure what is the differences and which one is the original one. could you help to discuss the differences? thanks @lhoestq
#1744 Add missing "brief" entries to reuters — pull request, closed, 2 comments
by jbragg · id 787,649,811 · created 2021-01-17T07:58:49 · updated 2021-01-18T11:26:09 · closed 2021-01-18T11:26:09
https://github.com/huggingface/datasets/pull/1744 (API: https://api.github.com/repos/huggingface/datasets/issues/1744)
body: This brings the number of examples for ModApte to match the stated `Training set (9,603 docs)...Test Set (3,299 docs)`

#1743 Issue while Creating Custom Metric — issue, closed, 3 comments
by gchhablani · id 787,631,412 · created 2021-01-17T07:01:14 · updated 2022-06-01T15:49:34 · closed 2022-06-01T15:49:34
https://github.com/huggingface/datasets/issues/1743 (API: https://api.github.com/repos/huggingface/datasets/issues/1743)
body: Hi Team, I am trying to create a custom metric for my training as follows, where f1 is my own metric: ```python def _info(self): # TODO: Specifies the datasets.MetricInfo object return datasets.MetricInfo( # This is the description that will appear on the metrics page. ...

#1742 Add GLUE Compat (compatible with transformers<3.5.0) — pull request, closed, 2 comments
by JetRunner · id 787,623,640 · created 2021-01-17T05:54:25 · updated 2023-09-24T09:52:12 · closed 2021-03-29T12:43:30
https://github.com/huggingface/datasets/pull/1742 (API: https://api.github.com/repos/huggingface/datasets/issues/1742)
body: Link to our discussion on Slack (HF internal) https://huggingface.slack.com/archives/C014N4749J9/p1609668119337400 The next step is to add a compatible option in the new `run_glue.py` I duplicated `glue` and made the following changes: 1. Change the name to `glue_compat`. 2. Change the label assignments for MN...

#1741 error when run fine_tuning on text_classification — issue, closed, 1 comment
by XiaoYang66 · id 787,327,060 · created 2021-01-16T02:23:19 · updated 2021-01-16T02:39:28 · closed 2021-01-16T02:39:18
https://github.com/huggingface/datasets/issues/1741 (API: https://api.github.com/repos/huggingface/datasets/issues/1741)
body: dataset:sem_eval_2014_task_1 pretrained_model:bert-base-uncased error description: when i use these resoruce to train fine_tuning a text_classification on sem_eval_2014_task_1,there always be some problem(when i use other dataset ,there exist the error too). And i followed the colab code (url:https://colab.researc...

#1740 add id_liputan6 dataset — pull request, closed, 0 comments
by cahya-wirawan · id 787,264,605 · created 2021-01-15T22:58:34 · updated 2021-01-20T13:41:26 · closed 2021-01-20T13:41:26
https://github.com/huggingface/datasets/pull/1740 (API: https://api.github.com/repos/huggingface/datasets/issues/1740)
body: id_liputan6 is a large-scale Indonesian summarization dataset. The articles were harvested from an online news portal, and obtain 215,827 document-summary pairs: https://arxiv.org/abs/2011.00679
#1739 fixes and improvements for the WebNLG loader — pull request, closed, 5 comments
by Shimorina · id 787,219,138 · created 2021-01-15T21:45:23 · updated 2021-01-29T14:34:06 · closed 2021-01-29T10:53:03
https://github.com/huggingface/datasets/pull/1739 (API: https://api.github.com/repos/huggingface/datasets/issues/1739)
body: - fixes test sets loading in v3.0 - adds additional fields for v3.0_ru - adds info to the WebNLG data card

#1738 Conda support — pull request, closed, 3 comments
by LysandreJik · id 786,068,440 · created 2021-01-14T15:11:25 · updated 2021-01-15T10:08:20 · closed 2021-01-15T10:08:19
https://github.com/huggingface/datasets/pull/1738 (API: https://api.github.com/repos/huggingface/datasets/issues/1738)
body: Will push a new version on anaconda cloud every time a tag starting with `v` is pushed (like `v1.2.2`). Will appear here: https://anaconda.org/huggingface/datasets Depends on `conda-forge` for now, so the following is required for installation: ``` conda install -c huggingface -c conda-forge datasets ```

#1737 update link in TLC to be github links — pull request, closed, 1 comment
by chameleonTK · id 785,606,286 · created 2021-01-14T02:49:21 · updated 2021-01-14T10:25:24 · closed 2021-01-14T10:25:24
https://github.com/huggingface/datasets/pull/1737 (API: https://api.github.com/repos/huggingface/datasets/issues/1737)
body: Base on this issue https://github.com/huggingface/datasets/issues/1064, I can now use the official links.

#1736 Adjust BrWaC dataset features name — pull request, closed, 0 comments
by jonatasgrosman · id 785,433,854 · created 2021-01-13T20:39:04 · updated 2021-01-14T10:29:38 · closed 2021-01-14T10:29:38
https://github.com/huggingface/datasets/pull/1736 (API: https://api.github.com/repos/huggingface/datasets/issues/1736)
body: I added this dataset some days ago, and today I used it to train some models and realized that the names of the features aren't so good. Looking at the current features hierarchy, we have "paragraphs" with a list of "sentences" with a list of "sentences?!". But the actual hierarchy is a "text" with a list of "paragr...

#1735 Update add new dataset template — pull request, closed, 2 comments
by sgugger · id 785,184,740 · created 2021-01-13T15:08:09 · updated 2021-01-14T15:16:01 · closed 2021-01-14T15:16:00
https://github.com/huggingface/datasets/pull/1735 (API: https://api.github.com/repos/huggingface/datasets/issues/1735)
body: This PR fixes a few typos in the "Add new dataset template" and clarifies a bit what to do for the dummy data creation when the `auto_generate` flag can't work.
#1734 Fix empty token bug for `thainer` and `lst20` — pull request, closed, 0 comments
by cstorm125 · id 784,956,707 · created 2021-01-13T09:55:09 · updated 2021-01-14T10:42:18 · closed 2021-01-14T10:42:18
https://github.com/huggingface/datasets/pull/1734 (API: https://api.github.com/repos/huggingface/datasets/issues/1734)
body: add a condition to check if tokens exist before yielding in `thainer` and `lst20`

#1733 connection issue with glue, what is the data url for glue? — issue, closed, 1 comment
by ghost · id 784,903,002 · created 2021-01-13T08:37:40 · updated 2021-08-04T18:13:55 · closed 2021-08-04T18:13:55
https://github.com/huggingface/datasets/issues/1733 (API: https://api.github.com/repos/huggingface/datasets/issues/1733)
body: Hi my codes sometimes fails due to connection issue with glue, could you tell me how I can have the URL datasets library is trying to read GLUE from to test the machines I am working on if there is an issue on my side or not thanks

#1732 [GEM Dataset] Added TurkCorpus, an evaluation dataset for sentence simplification. — pull request, closed, 1 comment
by mounicam · id 784,874,490 · created 2021-01-13T07:50:19 · updated 2021-01-14T10:19:41 · closed 2021-01-14T10:19:41
https://github.com/huggingface/datasets/pull/1732 (API: https://api.github.com/repos/huggingface/datasets/issues/1732)
body: We want to use TurkCorpus for validation and testing of the sentence simplification task.

#1731 Couldn't reach swda.py — issue, closed, 2 comments
by yangp725 · id 784,744,674 · created 2021-01-13T02:57:40 · updated 2021-01-13T11:17:40 · closed 2021-01-13T11:17:40
https://github.com/huggingface/datasets/issues/1731 (API: https://api.github.com/repos/huggingface/datasets/issues/1731)
body: ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/swda/swda.py

#1730 Add MNIST dataset — pull request, closed, 0 comments
by sgugger · id 784,617,525 · created 2021-01-12T21:48:02 · updated 2021-01-13T10:19:47 · closed 2021-01-13T10:19:46
https://github.com/huggingface/datasets/pull/1730 (API: https://api.github.com/repos/huggingface/datasets/issues/1730)
body: This PR adds the MNIST dataset to the library.
#1729 Is there support for Deep learning datasets? — issue, closed, 1 comment
by pablodz · id 784,565,898 · created 2021-01-12T20:22:41 · updated 2021-03-31T04:24:07 · closed 2021-03-31T04:24:07
https://github.com/huggingface/datasets/issues/1729 (API: https://api.github.com/repos/huggingface/datasets/issues/1729)
body: I looked around this repository and looking the datasets I think that there's no support for images-datasets. Or am I missing something? For example to add a repo like this https://github.com/DZPeru/fish-datasets

#1728 Add an entry to an arrow dataset — issue, closed, 5 comments
by ameet-1997 · id 784,458,342 · created 2021-01-12T18:01:47 · updated 2021-01-18T19:15:32 · closed 2021-01-18T19:15:32
https://github.com/huggingface/datasets/issues/1728 (API: https://api.github.com/repos/huggingface/datasets/issues/1728)
body: Is it possible to add an entry to a dataset object? **Motivation: I want to transform the sentences in the dataset and add them to the original dataset** For example, say we have the following code: ``` python from datasets import load_dataset # Load a dataset and print the first examples in the training s...

#1727 BLEURT score calculation raises UnrecognizedFlagError — issue, closed, 10 comments
by nadavo · id 784,435,131 · created 2021-01-12T17:27:02 · updated 2022-06-01T16:06:02 · closed 2022-06-01T16:06:02
https://github.com/huggingface/datasets/issues/1727 (API: https://api.github.com/repos/huggingface/datasets/issues/1727)
body: Calling the `compute` method for **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`. My environment: ``` python==3.8.5 datasets==1.2.0 tensorflow==2.3.1 cudatoolkit==11.0.221 ``` Test code for reproducing the error: ``` from datasets import load_metric bleurt = load_me...

#1726 Offline loading — pull request, closed, 6 comments
by lhoestq · id 784,336,370 · created 2021-01-12T15:21:57 · updated 2022-02-15T10:32:10 · closed 2021-01-19T16:42:32
https://github.com/huggingface/datasets/pull/1726 (API: https://api.github.com/repos/huggingface/datasets/issues/1726)
body: As discussed in #824 it would be cool to make the library work in offline mode. Currently if there's not internet connection then modules (datasets or metrics) that have already been loaded in the past can't be loaded and it raises a ConnectionError. This is because `prepare_module` fetches online for the latest vers...

#1725 load the local dataset — issue, closed, 7 comments
by xinjicong · id 784,182,273 · created 2021-01-12T12:12:55 · updated 2022-06-01T16:00:59 · closed 2022-06-01T16:00:59
https://github.com/huggingface/datasets/issues/1725 (API: https://api.github.com/repos/huggingface/datasets/issues/1725)
body: your guidebook's example is like >>>from datasets import load_dataset >>> dataset = load_dataset('json', data_files='my_file.json') but the first arg is path... so how should i do if i want to load the local dataset for model training? i will be grateful if you can help me handle this problem! thanks a lot!
#1723 ADD S3 support for downloading and uploading processed datasets — pull request, closed, 1 comment
by philschmid · id 783,982,100 · created 2021-01-12T07:17:34 · updated 2021-01-26T17:02:08 · closed 2021-01-26T17:02:08
https://github.com/huggingface/datasets/pull/1723 (API: https://api.github.com/repos/huggingface/datasets/issues/1723)
body: # What does this PR do? This PR adds the functionality to load and save `datasets` from and to s3. You can save `datasets` with either `Dataset.save_to_disk()` or `DatasetDict.save_to_disk`. You can load `datasets` with either `load_from_disk` or `Dataset.load_from_disk()`, `DatasetDict.load_from_disk()`. Lo...

#1724 could not run models on a offline server successfully — issue, closed, 6 comments
by lkcao · id 784,023,338 · created 2021-01-12T06:08:06 · updated 2022-10-05T12:39:07 · closed 2022-10-05T12:39:07
https://github.com/huggingface/datasets/issues/1724 (API: https://api.github.com/repos/huggingface/datasets/issues/1724)
body: Hi, I really need your help about this. I am trying to fine-tuning a RoBERTa on a remote server, which is strictly banning internet. I try to install all the packages by hand and try to run run_mlm.py on the server. It works well on colab, but when I try to run it on this offline server, it shows: ![image](https://us...

#1722 Added unfiltered versions of the Wiki-Auto training data for the GEM simplification task. — pull request, closed, 1 comment
by mounicam · id 783,921,679 · created 2021-01-12T05:26:04 · updated 2021-01-12T18:14:53 · closed 2021-01-12T17:35:57
https://github.com/huggingface/datasets/pull/1722 (API: https://api.github.com/repos/huggingface/datasets/issues/1722)
(no body)

#1721 [Scientific papers] Mirror datasets zip — pull request, closed, 4 comments
by patrickvonplaten · id 783,828,428 · created 2021-01-12T01:15:40 · updated 2021-01-12T11:49:15 · closed 2021-01-12T11:41:47
https://github.com/huggingface/datasets/pull/1721 (API: https://api.github.com/repos/huggingface/datasets/issues/1721)
body: Datasets were uploading to https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/arxiv-dataset.zip and https://s3.amazonaws.com/datasets.huggingface.co/scientific_papers/1.1.1/pubmed-dataset.zip respectively to escape google drive quota and enable faster download.

#1720 Adding the NorNE dataset for NER — pull request, closed, 13 comments
by versae · id 783,721,833 · created 2021-01-11T21:34:13 · updated 2021-03-31T14:23:49 · closed 2021-03-31T14:13:17
https://github.com/huggingface/datasets/pull/1720 (API: https://api.github.com/repos/huggingface/datasets/issues/1720)
body: NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, or...
#1719 Fix column list comparison in transmit format — pull request, closed, 0 comments
by lhoestq · id 783,557,542 · created 2021-01-11T17:23:56 · updated 2021-01-11T18:45:03 · closed 2021-01-11T18:45:02
https://github.com/huggingface/datasets/pull/1719 (API: https://api.github.com/repos/huggingface/datasets/issues/1719)
body: As noticed in #1718 the cache might not reload the cache files when new columns were added. This is because of an issue in `transmit_format` where the column list comparison fails because the order was not deterministic. This causes the `transmit_format` to apply an unnecessary `set_format` transform with shuffled col...

#1718 Possible cache miss in datasets — issue, closed, 18 comments
by ofirzaf · id 783,474,753 · created 2021-01-11T15:37:31 · updated 2022-06-29T14:54:42 · closed 2021-01-26T02:47:59
https://github.com/huggingface/datasets/issues/1718 (API: https://api.github.com/repos/huggingface/datasets/issues/1718)
body: Hi, I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache. I have attached an example script that for me reproduces the problem. In the attached example the second map function always recomputes instead of loading fr...

#1717 SciFact dataset - minor changes — issue, closed, 4 comments
by dwadden · id 783,074,255 · created 2021-01-11T05:26:40 · updated 2021-01-26T02:52:17 · closed 2021-01-26T02:52:17
https://github.com/huggingface/datasets/issues/1717 (API: https://api.github.com/repos/huggingface/datasets/issues/1717)
body: Hi, SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated! I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this? It also looks like the dataset is being downloa...

#1716 Add Hatexplain Dataset — pull request, closed, 0 comments
by kushal2000 · id 782,819,006 · created 2021-01-10T13:30:01 · updated 2021-01-18T14:21:42 · closed 2021-01-18T14:21:42
https://github.com/huggingface/datasets/pull/1716 (API: https://api.github.com/repos/huggingface/datasets/issues/1716)
body: Adding Hatexplain - the first benchmark hate speech dataset covering multiple aspects of the issue

#1715 add Korean intonation-aided intention identification dataset — pull request, closed, 0 comments
by stevhliu · id 782,754,441 · created 2021-01-10T06:29:04 · updated 2021-09-17T16:54:13 · closed 2021-01-12T17:14:33
https://github.com/huggingface/datasets/pull/1715 (API: https://api.github.com/repos/huggingface/datasets/issues/1715)
(no body)
#1714 Adding adversarialQA dataset — pull request, closed, 5 comments
by maxbartolo · id 782,416,276 · created 2021-01-08T21:46:09 · updated 2021-01-13T16:05:24 · closed 2021-01-13T16:05:24
https://github.com/huggingface/datasets/pull/1714 (API: https://api.github.com/repos/huggingface/datasets/issues/1714)
body: Adding the adversarialQA dataset (https://adversarialqa.github.io/) from Beat the AI (https://arxiv.org/abs/2002.00293)

#1713 Installation using conda — issue, closed, 5 comments
by pranav-s · id 782,337,723 · created 2021-01-08T19:12:15 · updated 2021-09-17T12:47:40 · closed 2021-09-17T12:47:40
https://github.com/huggingface/datasets/issues/1713 (API: https://api.github.com/repos/huggingface/datasets/issues/1713)
body: Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only be installed using pip and...

#1712 Silicone — pull request, closed, 6 comments
by eusip · id 782,313,097 · created 2021-01-08T18:24:18 · updated 2021-01-21T14:12:37 · closed 2021-01-21T10:31:11
https://github.com/huggingface/datasets/pull/1712 (API: https://api.github.com/repos/huggingface/datasets/issues/1712)
body: My collaborators and I within the Affective Computing team at Telecom Paris would like to push our spoken dialogue dataset for publication.

#1711 Fix windows path scheme in cached path — pull request, closed, 0 comments
by lhoestq · id 782,129,083 · created 2021-01-08T13:45:56 · updated 2021-01-11T09:23:20 · closed 2021-01-11T09:23:19
https://github.com/huggingface/datasets/pull/1711 (API: https://api.github.com/repos/huggingface/datasets/issues/1711)
body: As noticed in #807 there's currently an issue with `cached_path` not raising `FileNotFoundError` on windows for absolute paths. This is due to the way we check for a path to be local or not. The check on the scheme using urlparse was incomplete. I fixed this and added tests

#1710 IsADirectoryError when trying to download C4 — issue, closed, 2 comments
by fredriko · id 781,914,951 · created 2021-01-08T07:31:30 · updated 2022-08-04T11:56:10 · closed 2022-08-04T11:55:04
https://github.com/huggingface/datasets/issues/1710 (API: https://api.github.com/repos/huggingface/datasets/issues/1710)
body: **TLDR**: I fail to download C4 and see a stacktrace originating in `IsADirectoryError` as an explanation for failure. How can the problem be fixed? **VERBOSE**: I use Python version 3.7 and have the following dependencies listed in my project: ``` datasets==1.2.0 apache-beam==2.26.0 ``` When runn...
#1709 Databases — issue, closed, 0 comments
by JimmyJim1 · id 781,875,640 · created 2021-01-08T06:14:03 · updated 2021-01-08T09:00:08 · closed 2021-01-08T09:00:08
https://github.com/huggingface/datasets/issues/1709 (API: https://api.github.com/repos/huggingface/datasets/issues/1709)
body: ## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...

#1708 <html dir="ltr" lang="en" class="focus-outline-visible"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> — issue, closed, 0 comments
by Louiejay54 · id 781,631,455 · created 2021-01-07T21:45:24 · updated 2021-01-08T09:00:01 · closed 2021-01-08T09:00:01
https://github.com/huggingface/datasets/issues/1708 (API: https://api.github.com/repos/huggingface/datasets/issues/1708)
body: ## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...

#1707 Added generated READMEs for datasets that were missing one. — pull request, closed, 1 comment
by madlag · id 781,507,545 · created 2021-01-07T18:10:06 · updated 2021-01-18T14:32:33 · closed 2021-01-18T14:32:33
https://github.com/huggingface/datasets/pull/1707 (API: https://api.github.com/repos/huggingface/datasets/issues/1707)
body: This is it: we worked on a generator with Yacine @yjernite , and we generated dataset cards for all missing ones (161), with all the information we could gather from datasets repository, and using dummy_data to generate examples when possible. Code is available here for the moment: https://github.com/madlag/datasets...

#1706 Error when downloading a large dataset on slow connection. — issue, open, 1 comment
by lucadiliello · id 781,494,476 · created 2021-01-07T17:48:15 · updated 2021-01-13T10:35:02 · closed —
https://github.com/huggingface/datasets/issues/1706 (API: https://api.github.com/repos/huggingface/datasets/issues/1706)
body: I receive the following error after about an hour trying to download the `openwebtext` dataset. The code used is: ```python import datasets datasets.load_dataset("openwebtext") ``` > Traceback (most recent call last): ...

#1705 Add information about caching and verifications in "Load a Dataset" docs — pull request, closed, 0 comments · labels: documentation
by SBrandeis · id 781,474,949 · created 2021-01-07T17:18:44 · updated 2021-01-12T14:08:01 · closed 2021-01-12T14:08:01
https://github.com/huggingface/datasets/pull/1705 (API: https://api.github.com/repos/huggingface/datasets/issues/1705)
body: Related to #215. Missing improvements from @lhoestq's #1703.
#1704 Update XSUM Factuality DatasetCard — pull request, closed, 0 comments
by vineeths96 · id 781,402,757 · created 2021-01-07T15:37:14 · updated 2021-01-12T13:30:04 · closed 2021-01-12T13:30:04
https://github.com/huggingface/datasets/pull/1704 (API: https://api.github.com/repos/huggingface/datasets/issues/1704)
body: Update XSUM Factuality DatasetCard

#1703 Improvements regarding caching and fingerprinting — pull request, closed, 8 comments
by lhoestq · id 781,395,146 · created 2021-01-07T15:26:29 · updated 2021-01-19T17:32:11 · closed 2021-01-19T17:32:10
https://github.com/huggingface/datasets/pull/1703 (API: https://api.github.com/repos/huggingface/datasets/issues/1703)
body: This PR adds these features: - Enable/disable caching If disabled, the library will no longer reload cached datasets files when applying transforms to the datasets. It is equivalent to setting `load_from_cache` to `False` in dataset transforms. ```python from datasets import set_caching_enabled set_cach...

#1702 Fix importlib metdata import in py38 — pull request, closed, 0 comments
by lhoestq · id 781,383,277 · created 2021-01-07T15:10:30 · updated 2021-01-08T10:47:15 · closed 2021-01-08T10:47:15
https://github.com/huggingface/datasets/pull/1702 (API: https://api.github.com/repos/huggingface/datasets/issues/1702)
body: In Python 3.8 there's no need to install `importlib_metadata` since it already exists as `importlib.metadata` in the standard lib.

#1701 Some datasets miss dataset_infos.json or dummy_data.zip — issue, closed, 2 comments
by madlag · id 781,345,717 · created 2021-01-07T14:17:13 · updated 2022-11-04T15:11:16 · closed 2022-11-04T15:06:00
https://github.com/huggingface/datasets/issues/1701 (API: https://api.github.com/repos/huggingface/datasets/issues/1701)
body: While working on dataset REAME generation script at https://github.com/madlag/datasets_readme_generator , I noticed that some datasets miss a dataset_infos.json : ``` c4 lm1b reclor wikihow ``` And some does not have a dummy_data.zip : ``` kor_nli math_dataset mlqa ms_marco newsgroup qa4mre qanga...

#1700 Update Curiosity dialogs DatasetCard — pull request, closed, 0 comments
by vineeths96 · id 781,333,589 · created 2021-01-07T13:59:27 · updated 2021-01-12T18:51:32 · closed 2021-01-12T18:51:32
https://github.com/huggingface/datasets/pull/1700 (API: https://api.github.com/repos/huggingface/datasets/issues/1700)
body: Update Curiosity dialogs DatasetCard There are some entries in the data fields section yet to be filled. There is little information regarding those fields.
781,271,558
https://api.github.com/repos/huggingface/datasets/issues/1699
https://github.com/huggingface/datasets/pull/1699
1,699
Update DBRD dataset card and download URL
closed
1
2021-01-07T12:16:43
2021-01-07T13:41:39
2021-01-07T13:40:59
benjaminvdb
[]
I've added the Dutch Book Review Dataset (DBRD) during the recent sprint. This pull request makes two minor changes: 1. I'm changing the download URL from Google Drive to the dataset's GitHub release package. This is now possible because of PR #1316. 2. I've updated the dataset card. Cheers! 😄
true
781,152,561
https://api.github.com/repos/huggingface/datasets/issues/1698
https://github.com/huggingface/datasets/pull/1698
1,698
Update Coached Conv Pref DatasetCard
closed
1
2021-01-07T09:07:16
2021-01-08T17:04:33
2021-01-08T17:04:32
vineeths96
[]
Update Coached Conversation Preferance DatasetCard
true
781,126,579
https://api.github.com/repos/huggingface/datasets/issues/1697
https://github.com/huggingface/datasets/pull/1697
1,697
Update DialogRE DatasetCard
closed
1
2021-01-07T08:22:33
2021-01-07T13:34:28
2021-01-07T13:34:28
vineeths96
[]
Update the information in the dataset card for the Dialog RE dataset.
true
781,096,918
https://api.github.com/repos/huggingface/datasets/issues/1696
https://github.com/huggingface/datasets/issues/1696
1,696
Unable to install datasets
closed
4
2021-01-07T07:24:37
2021-01-08T00:33:05
2021-01-07T22:06:05
glee2429
[]
** Edit ** I believe there's a bug with the package when you're installing it with Python 3.9. I recommend sticking with previous versions. Thanks, @thomwolf for the insight! **Short description** I followed the instructions for installing datasets (https://huggingface.co/docs/datasets/installation.html). Howev...
false
780,971,987
https://api.github.com/repos/huggingface/datasets/issues/1695
https://github.com/huggingface/datasets/pull/1695
1,695
fix ner_tag bugs in thainer
closed
1
2021-01-07T02:12:33
2021-01-07T14:43:45
2021-01-07T14:43:28
cstorm125
[]
fix bug that results in `ner_tag` always equal to 'O'.
true
780,429,080
https://api.github.com/repos/huggingface/datasets/issues/1694
https://github.com/huggingface/datasets/pull/1694
1,694
Add OSCAR
closed
10
2021-01-06T10:21:08
2021-01-25T09:10:33
2021-01-25T09:10:32
lhoestq
[]
Continuation of #348 The files have been moved to S3 and only the unshuffled version is available. Both original and deduplicated versions of each language are available. Example of usage: ```python from datasets import load_dataset oscar_dedup_en = load_dataset("oscar", "unshuffled_deduplicated_en", split="...
true
780,268,595
https://api.github.com/repos/huggingface/datasets/issues/1693
https://github.com/huggingface/datasets/pull/1693
1,693
Fix reuters metadata parsing errors
closed
0
2021-01-06T08:26:03
2021-01-07T23:53:47
2021-01-07T14:01:22
jbragg
[]
Was missing the last entry in each metadata category
true
779,882,271
https://api.github.com/repos/huggingface/datasets/issues/1691
https://github.com/huggingface/datasets/pull/1691
1,691
Updated HuggingFace Datasets README (fix typos)
closed
0
2021-01-06T02:14:38
2021-01-16T23:30:47
2021-01-07T10:06:32
8bitmp3
[]
Awesome work on 🤗 Datasets. I found a couple of small typos in the README. Hope this helps. ![](https://emojipedia-us.s3.dualstack.us-west-1.amazonaws.com/thumbs/160/google/56/hugging-face_1f917.png)
true
779,441,631
https://api.github.com/repos/huggingface/datasets/issues/1690
https://github.com/huggingface/datasets/pull/1690
1,690
Fast start up
closed
0
2021-01-05T19:07:53
2021-01-06T14:20:59
2021-01-06T14:20:58
lhoestq
[]
Currently if optional dependencies such as tensorflow, torch, apache_beam, faiss and elasticsearch are installed, then it takes a long time to do `import datasets` since it imports all of these heavy dependencies. To make a fast start up for `datasets` I changed that so that they are not imported when `datasets` is ...
true
779,107,313
https://api.github.com/repos/huggingface/datasets/issues/1689
https://github.com/huggingface/datasets/pull/1689
1,689
Fix ade_corpus_v2 config names
closed
0
2021-01-05T14:33:28
2021-01-05T14:55:09
2021-01-05T14:55:08
lhoestq
[]
There are currently some typos in the config names of the `ade_corpus_v2` dataset, I fixed them: - Ade_corpos_v2_classificaion -> Ade_corpus_v2_classification - Ade_corpos_v2_drug_ade_relation -> Ade_corpus_v2_drug_ade_relation - Ade_corpos_v2_drug_dosage_relation -> Ade_corpus_v2_drug_dosage_relation
true
779,029,685
https://api.github.com/repos/huggingface/datasets/issues/1688
https://github.com/huggingface/datasets/pull/1688
1,688
Fix DaNE last example
closed
0
2021-01-05T13:29:37
2021-01-05T14:00:15
2021-01-05T14:00:13
lhoestq
[]
The last example from the DaNE dataset is empty. Fix #1686
true
779,004,894
https://api.github.com/repos/huggingface/datasets/issues/1687
https://github.com/huggingface/datasets/issues/1687
1,687
Question: Shouldn't .info be a part of DatasetDict?
open
2
2021-01-05T13:08:41
2021-01-07T10:18:06
null
KennethEnevoldsen
[]
Currently, only `Dataset` contains the .info or .features, but as many datasets contains standard splits (train, test) and thus the underlying information is the same (or at least should be) across the datasets. For instance: ``` >>> ds = datasets.load_dataset("conll2002", "es") >>> ds.info Traceback (most rece...
false
778,921,684
https://api.github.com/repos/huggingface/datasets/issues/1686
https://github.com/huggingface/datasets/issues/1686
1,686
Dataset Error: DaNE contains empty samples at the end
closed
3
2021-01-05T11:54:26
2021-01-05T14:01:09
2021-01-05T14:00:13
KennethEnevoldsen
[]
The dataset DaNE, contains empty samples at the end. It is naturally easy to remove using a filter but should probably not be there, to begin with as it can cause errors. ```python >>> import datasets [...] >>> dataset = datasets.load_dataset("dane") [...] >>> dataset["test"][-1] {'dep_ids': [], 'dep_labels': ...
false
778,914,431
https://api.github.com/repos/huggingface/datasets/issues/1685
https://github.com/huggingface/datasets/pull/1685
1,685
Update README.md of covid-tweets-japanese
closed
1
2021-01-05T11:47:27
2021-01-06T10:27:12
2021-01-06T09:31:10
forest1988
[]
Update README.md of covid-tweets-japanese added by PR https://github.com/huggingface/datasets/pull/1367 and https://github.com/huggingface/datasets/pull/1402. - Update "Data Splits" to be more precise that no information is provided for now. - old: [More Information Needed] - new: No information about data spl...
true
778,356,196
https://api.github.com/repos/huggingface/datasets/issues/1684
https://github.com/huggingface/datasets/pull/1684
1,684
Add CANER Corpus
closed
0
2021-01-04T20:49:11
2021-01-25T09:09:20
2021-01-25T09:09:20
KMFODA
[]
What does this PR do? Adds the following dataset: https://github.com/RamziSalah/Classical-Arabic-Named-Entity-Recognition-Corpus Who can review? @lhoestq
true
778,287,612
https://api.github.com/repos/huggingface/datasets/issues/1683
https://github.com/huggingface/datasets/issues/1683
1,683
`ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext
closed
2
2021-01-04T18:47:53
2021-01-04T19:04:45
2021-01-04T19:04:45
abarbosa94
[]
It seems to fail the final batch ): steps to reproduce: ``` from datasets import load_dataset from elasticsearch import Elasticsearch import torch from transformers import file_utils, set_seed from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast MAX_SEQ_LENGTH = 256 ctx_encoder = DPRCon...
false
778,268,156
https://api.github.com/repos/huggingface/datasets/issues/1682
https://github.com/huggingface/datasets/pull/1682
1,682
Don't use xlrd for xlsx files
closed
0
2021-01-04T18:11:50
2021-01-04T18:13:14
2021-01-04T18:13:13
lhoestq
[]
Since the latest release of `xlrd` (2.0), the support for xlsx files stopped. Therefore we needed to use something else. A good alternative is `openpyxl` which has also an integration with pandas si we can still call `pd.read_excel`. I left the unused import of `openpyxl` in the dataset scripts to show users that ...
true
777,644,163
https://api.github.com/repos/huggingface/datasets/issues/1681
https://github.com/huggingface/datasets/issues/1681
1,681
Dataset "dane" missing
closed
3
2021-01-03T14:03:03
2021-01-05T08:35:35
2021-01-05T08:35:13
KennethEnevoldsen
[]
the `dane` dataset appear to be missing in the latest version (1.1.3). ```python >>> import datasets >>> datasets.__version__ '1.1.3' >>> "dane" in datasets.list_datasets() True ``` As we can see it should be present, but doesn't seem to be findable when using `load_dataset`. ```python >>> datasets.load...
false
777,623,053
https://api.github.com/repos/huggingface/datasets/issues/1680
https://github.com/huggingface/datasets/pull/1680
1,680
added TurkishProductReviews dataset
closed
2
2021-01-03T11:52:59
2021-01-04T18:15:35
2021-01-04T18:15:35
basakbuluz
[]
This PR added **Turkish Product Reviews Dataset contains 235.165 product reviews collected online. There are 220.284 positive, 14881 negative reviews**. - **Repository:** [turkish-text-data](https://github.com/fthbrmnby/turkish-text-data) - **Point of Contact:** Fatih Barmanbay - @fthbrmnby
true
777,587,792
https://api.github.com/repos/huggingface/datasets/issues/1679
https://github.com/huggingface/datasets/issues/1679
1,679
Can't import cc100 dataset
closed
1
2021-01-03T07:12:56
2022-10-05T12:42:25
2022-10-05T12:42:25
alighofrani95
[]
There is some issue to import cc100 dataset. ``` from datasets import load_dataset dataset = load_dataset("cc100") ``` FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/cc100/cc100.py During handling of the above exception, another exception occur...
false
777,567,920
https://api.github.com/repos/huggingface/datasets/issues/1678
https://github.com/huggingface/datasets/pull/1678
1,678
Switchboard Dialog Act Corpus added under `datasets/swda`
closed
8
2021-01-03T03:53:41
2021-01-08T18:09:21
2021-01-05T10:06:35
gmihaila
[]
Switchboard Dialog Act Corpus Intro: The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2, with turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information about the associated turn. The SwDA project was undertaken at UC ...
true
777,553,383
https://api.github.com/repos/huggingface/datasets/issues/1677
https://github.com/huggingface/datasets/pull/1677
1,677
Switchboard Dialog Act Corpus added under `datasets/swda`
closed
1
2021-01-03T01:16:42
2021-01-03T02:55:57
2021-01-03T02:55:56
gmihaila
[]
Pleased to announced that I added my first dataset **Switchboard Dialog Act Corpus**. I think this is an important datasets to be added since it is the only one related to dialogue act classification. Hope the pull request is ok. Wasn't able to see any special formatting for the pull request form. The Swi...
true
777,477,645
https://api.github.com/repos/huggingface/datasets/issues/1676
https://github.com/huggingface/datasets/pull/1676
1,676
new version of Ted Talks IWSLT (WIT3)
closed
3
2021-01-02T15:30:03
2021-01-14T10:10:19
2021-01-14T10:10:19
skyprince999
[]
In the previous iteration #1608 I had used language pairs. Which created 21,582 configs (109*108) !!! Now, TED talks in _each language_ is a separate config. So it's more cleaner with _just 109 configs_ (one for each language). Dummy files were created manually. Locally I was able to clear the `python dataset...
true
777,367,320
https://api.github.com/repos/huggingface/datasets/issues/1675
https://github.com/huggingface/datasets/issues/1675
1,675
Add the 800GB Pile dataset?
closed
7
2021-01-01T22:58:12
2021-12-01T15:29:07
2021-12-01T15:29:07
lewtun
[ "dataset request" ]
## Adding a Dataset - **Name:** The Pile - **Description:** The Pile is a 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement - **Paper:*...
false
777,321,840
https://api.github.com/repos/huggingface/datasets/issues/1674
https://github.com/huggingface/datasets/issues/1674
1,674
dutch_social can't be loaded
closed
8
2021-01-01T17:37:08
2022-10-05T13:03:26
2022-10-05T13:03:26
koenvandenberge
[]
Hi all, I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social). However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links. ``` (base) Koens-MacBook-Pro:~ koe...
false
777,263,651
https://api.github.com/repos/huggingface/datasets/issues/1673
https://github.com/huggingface/datasets/issues/1673
1,673
Unable to Download Hindi Wikipedia Dataset
closed
6
2021-01-01T10:52:53
2021-01-05T10:22:12
2021-01-05T10:22:12
aditya3498
[]
I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code both. Please help me to understand how to reso...
false
777,258,941
https://api.github.com/repos/huggingface/datasets/issues/1672
https://github.com/huggingface/datasets/issues/1672
1,672
load_dataset hang on file_lock
closed
3
2021-01-01T10:25:07
2021-03-31T16:24:13
2021-01-01T11:47:36
tomacai
[]
I am trying to load the squad dataset. Fails on Windows 10 but succeeds in Colab. Transformers: 3.3.1 Datasets: 1.0.2 Windows 10 (also tested in WSL) ``` datasets.logging.set_verbosity_debug() datasets. train_dataset = load_dataset('squad', split='train') valid_dataset = load_dataset('squad', split='validat...
false
776,652,193
https://api.github.com/repos/huggingface/datasets/issues/1671
https://github.com/huggingface/datasets/issues/1671
1,671
connection issue
closed
2
2020-12-30T21:56:20
2022-10-05T12:42:12
2022-10-05T12:42:12
rabeehkarimimahabadi
[]
Hi I am getting this connection issue, resulting in large failure on cloud, @lhoestq I appreciate your help on this. If I want to keep the codes the same, so not using save_to_disk, load_from_disk, but save the datastes in the way load_dataset reads from and copy the files in the same folder the datasets library r...
false
776,608,579
https://api.github.com/repos/huggingface/datasets/issues/1670
https://github.com/huggingface/datasets/issues/1670
1,670
wiki_dpr pre-processing performance
open
3
2020-12-30T19:41:43
2021-01-28T09:41:36
null
dbarnhart
[ "enhancement", "Dataset discussion" ]
I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h). I won't repeat the concerns around multipro...
false
776,608,386
https://api.github.com/repos/huggingface/datasets/issues/1669
https://github.com/huggingface/datasets/issues/1669
1,669
wiki_dpr dataset pre-processing performance
closed
1
2020-12-30T19:41:09
2020-12-30T19:42:25
2020-12-30T19:42:25
dbarnhart
[]
I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h). I won't repeat the concerns around multipro...
false
776,552,854
https://api.github.com/repos/huggingface/datasets/issues/1668
https://github.com/huggingface/datasets/pull/1668
1,668
xed_en_fi dataset Cleanup
closed
0
2020-12-30T17:11:18
2020-12-30T17:22:44
2020-12-30T17:22:43
lhoestq
[]
Fix ClassLabel feature type and minor mistakes in the dataset card
true
776,446,658
https://api.github.com/repos/huggingface/datasets/issues/1667
https://github.com/huggingface/datasets/pull/1667
1,667
Fix NER metric example in Overview notebook
closed
0
2020-12-30T13:05:19
2020-12-31T01:12:08
2020-12-30T17:21:51
jungwhank
[]
Fix errors in `NER metric example` section in `Overview.ipynb`. ``` --------------------------------------------------------------------------- ImportError Traceback (most recent call last) <ipython-input-37-ee559b166e25> in <module>() ----> 1 ner_metric = load_metric('seqeval') ...
true
776,432,006
https://api.github.com/repos/huggingface/datasets/issues/1666
https://github.com/huggingface/datasets/pull/1666
1,666
Add language to dataset card for Makhzan dataset.
closed
0
2020-12-30T12:25:52
2020-12-30T17:20:35
2020-12-30T17:20:35
arkhalid
[]
Add language to dataset card.
true
776,431,087
https://api.github.com/repos/huggingface/datasets/issues/1665
https://github.com/huggingface/datasets/pull/1665
1,665
Add language to dataset card for Counter dataset.
closed
0
2020-12-30T12:23:20
2020-12-30T17:20:20
2020-12-30T17:20:20
arkhalid
[]
Add language.
true
775,956,441
https://api.github.com/repos/huggingface/datasets/issues/1664
https://github.com/huggingface/datasets/pull/1664
1,664
removed \n in labels
closed
0
2020-12-29T15:41:43
2020-12-30T17:18:49
2020-12-30T17:18:49
bhavitvyamalik
[]
updated social_i_qa labels as per #1633
true