id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
743,904,516 | https://api.github.com/repos/huggingface/datasets/issues/858 | https://github.com/huggingface/datasets/pull/858 | 858 | Add SemEval-2010 task 8 | closed | 1 | 2020-11-16T14:57:57 | 2020-11-26T17:28:55 | 2020-11-26T17:28:55 | JoelNiklaus | [] | Hi,
I don't know how to add dummy data, since I create the validation set out of the last 1000 examples of the train set. If you have a suggestion, I am happy to implement it.
Cheers,
Joel | true |
743,863,214 | https://api.github.com/repos/huggingface/datasets/issues/857 | https://github.com/huggingface/datasets/pull/857 | 857 | Use pandas reader in csv | closed | 0 | 2020-11-16T14:05:45 | 2020-11-19T17:35:40 | 2020-11-19T17:35:38 | lhoestq | [] | The pyarrow CSV reader has issues that the pandas one doesn't (see #836 ).
To fix that I switched to the pandas csv reader.
The new reader is compatible with all the pandas parameters to read csv files.
Moreover it reads the csv in chunks to save RAM, while the pyarrow one loads everything into memory.
Fix #836... | true |
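A minimal sketch of the chunked approach this PR describes — not the PR's actual code; the file name and chunk size are placeholders, and the real reader would hand each chunk to an ArrowWriter rather than accumulating tables:

```python
import pandas as pd
import pyarrow as pa

# Read the csv in chunks so only one chunk is held in pandas form at a time,
# converting each chunk to an Arrow table as we go.
tables = [
    pa.Table.from_pandas(chunk)
    for chunk in pd.read_csv("data.csv", chunksize=10_000)  # "data.csv" is a placeholder
]
table = pa.concat_tables(tables)
```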
743,799,239 | https://api.github.com/repos/huggingface/datasets/issues/856 | https://github.com/huggingface/datasets/pull/856 | 856 | Add open book corpus | closed | 21 | 2020-11-16T12:30:02 | 2024-01-04T13:20:51 | 2020-11-17T15:22:18 | vblagoje | [] | Adds book corpus based on Shawn Presser's [work](https://github.com/soskek/bookcorpus/issues/27). @richarddwang, the author of the original BookCorpus dataset, suggested it should be named [OpenBookCorpus](https://github.com/huggingface/datasets/issues/486). I named it BookCorpusOpen to be easily located alphabetically... | true |
743,690,839 | https://api.github.com/repos/huggingface/datasets/issues/855 | https://github.com/huggingface/datasets/pull/855 | 855 | Fix kor nli csv reader | closed | 0 | 2020-11-16T09:53:41 | 2020-11-16T13:59:14 | 2020-11-16T13:59:12 | lhoestq | [] | The kor_nli dataset had an issue with the csv reader that was not able to parse the lines correctly. Some lines were merged together for some reason.
I fixed that by iterating through the lines directly instead of using a csv reader.
I also changed the feature names to match the other NLI datasets (i.e. use "premise"... | true |
743,675,376 | https://api.github.com/repos/huggingface/datasets/issues/854 | https://github.com/huggingface/datasets/issues/854 | 854 | wmt16 does not download | closed | 12 | 2020-11-16T09:31:51 | 2022-10-05T12:27:42 | 2022-10-05T12:27:42 | rabeehk | [
"dataset bug"
] | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | false |
743,426,583 | https://api.github.com/repos/huggingface/datasets/issues/853 | https://github.com/huggingface/datasets/issues/853 | 853 | concatenate_datasets support axis=0 or 1 ? | closed | 10 | 2020-11-16T02:46:23 | 2021-04-19T16:07:18 | 2021-04-19T16:07:18 | renqingcolin | [
"enhancement",
"help wanted",
"question"
] | I want to achieve the following result

| false |
743,396,240 | https://api.github.com/repos/huggingface/datasets/issues/852 | https://github.com/huggingface/datasets/issues/852 | 852 | wmt cannot be downloaded | closed | 0 | 2020-11-16T01:04:41 | 2020-11-16T09:31:58 | 2020-11-16T09:31:58 | rabeehk | [
"dataset request"
] | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | false |
742,369,419 | https://api.github.com/repos/huggingface/datasets/issues/850 | https://github.com/huggingface/datasets/pull/850 | 850 | Create ClassLabel for labelling tasks datasets | closed | 1 | 2020-11-13T11:07:22 | 2020-11-16T10:32:05 | 2020-11-16T10:31:58 | jplu | [] | This PR adds a specific `ClassLabel` for the datasets that are about a labelling task such as POS, NER or Chunking. | true |
742,263,333 | https://api.github.com/repos/huggingface/datasets/issues/849 | https://github.com/huggingface/datasets/issues/849 | 849 | Load amazon dataset | closed | 1 | 2020-11-13T08:34:24 | 2020-11-17T07:22:59 | 2020-11-17T07:22:59 | bhavitvyamalik | [] | Hi,
I was going through the amazon_us_reviews dataset and found that the example API usage given on the website is different from the API usage when loading the dataset.
E.g., the API usage on the [website](https://huggingface.co/datasets/amazon_us_reviews):
```
from datasets import load_dataset
dataset = load_dataset("amaz... | false |
742,240,942 | https://api.github.com/repos/huggingface/datasets/issues/848 | https://github.com/huggingface/datasets/issues/848 | 848 | Error when concatenate_datasets | closed | 4 | 2020-11-13T07:56:02 | 2020-11-13T17:40:59 | 2020-11-13T15:55:10 | shexuan | [] | Hello, when I concatenate two dataset loading from disk, I encountered a problem:
```
test_dataset = load_from_disk('data/test_dataset')
trn_dataset = load_from_disk('data/train_dataset')
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```
And it reported a ValueError as below:
```
--------------...
``` | false |
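Since the traceback above is truncated, a schema mismatch is only a guess; if that is the cause, a sketch of aligning the features before concatenating could look like this (`cast` availability assumed):

```python
from datasets import concatenate_datasets, load_from_disk

trn_dataset = load_from_disk('data/train_dataset')
test_dataset = load_from_disk('data/test_dataset')

# concatenate_datasets requires both datasets to share the same schema; casting
# one to the other's features is a common way to clear the ValueError. This is
# an assumption about the truncated report, not a confirmed diagnosis.
test_dataset = test_dataset.cast(trn_dataset.features)
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```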
742,179,495 | https://api.github.com/repos/huggingface/datasets/issues/847 | https://github.com/huggingface/datasets/issues/847 | 847 | multiprocessing in dataset map "can only test a child process" | closed | 9 | 2020-11-13T06:01:04 | 2022-10-05T12:22:51 | 2022-10-05T12:22:51 | timothyjlaurent | [] | Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
    return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
-------------------------... | false |
741,885,174 | https://api.github.com/repos/huggingface/datasets/issues/846 | https://github.com/huggingface/datasets/issues/846 | 846 | Add HoVer multi-hop fact verification dataset | closed | 3 | 2020-11-12T19:55:46 | 2020-12-10T21:47:33 | 2020-12-10T21:47:33 | yjernite | [
"dataset request"
] | ## Adding a Dataset
- **Name:** HoVer
- **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples
- **Paper:** https://arxiv.org/abs/2011.03088
- **Data:** https://hover-nlp.github.io/
- **Motivation:** There are still few multi-hop information extraction... | false |
741,841,350 | https://api.github.com/repos/huggingface/datasets/issues/845 | https://github.com/huggingface/datasets/pull/845 | 845 | amazon description fields as bullets | closed | 0 | 2020-11-12T18:50:41 | 2020-11-12T18:50:54 | 2020-11-12T18:50:54 | joeddav | [] | One more minor formatting change to amazon reviews's description (in addition to #844). Just reformatting the fields to display as a bulleted list in markdown. | true |
741,835,661 | https://api.github.com/repos/huggingface/datasets/issues/844 | https://github.com/huggingface/datasets/pull/844 | 844 | add newlines to amazon desc | closed | 0 | 2020-11-12T18:41:20 | 2020-11-12T18:42:25 | 2020-11-12T18:42:21 | joeddav | [] | Just a quick formatting fix to hopefully make it render nicer on Viewer | true |
741,531,121 | https://api.github.com/repos/huggingface/datasets/issues/843 | https://github.com/huggingface/datasets/issues/843 | 843 | use_custom_baseline still produces errors for bertscore | closed | 5 | 2020-11-12T11:44:32 | 2024-05-28T16:30:17 | 2021-02-09T14:21:48 | penatbater | [
"metric bug"
] | ```python
metric = load_metric('bertscore')
a1 = "random sentences"
b1 = "random sentences"
metric.compute(predictions = [a1], references = [b1], lang = 'en')
```
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"...
``` | false |
741,208,428 | https://api.github.com/repos/huggingface/datasets/issues/842 | https://github.com/huggingface/datasets/issues/842 | 842 | How to enable `.map()` pre-processing pipelines to support multi-node parallelism? | open | 5 | 2020-11-12T02:04:38 | 2025-03-26T09:10:22 | null | shangw-nvidia | [] | Hi,
Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other ... | false |
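There is no built-in multi-node support for `.map()` here, but one hedged workaround is manual sharding per node with `Dataset.shard` — a sketch with placeholder rank values:

```python
from datasets import load_dataset

# world_size and rank would come from the launcher (e.g. torch.distributed);
# fixed placeholder values here.
world_size, rank = 4, 0

ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
# Each node maps only its own contiguous shard; the processed shards can then
# be saved to a shared filesystem and concatenated afterwards.
shard = ds.shard(num_shards=world_size, index=rank, contiguous=True)
shard = shard.map(lambda ex: {"n_chars": len(ex["text"])}, num_proc=4)
```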
740,737,448 | https://api.github.com/repos/huggingface/datasets/issues/841 | https://github.com/huggingface/datasets/issues/841 | 841 | Can not reuse datasets already downloaded | closed | 2 | 2020-11-11T12:42:15 | 2020-11-11T18:17:16 | 2020-11-11T18:17:16 | jc-hou | [] | Hello,
I need to connect to a frontal node (with an http proxy, no gpu) before connecting to a gpu node (which has no http proxy, so I cannot use wget and the like).
I successfully downloaded and reused the wikipedia datasets on the frontal node.
When I connect to the gpu node, I am supposed to use the downloaded datasets from the cache, but... | false |
740,632,771 | https://api.github.com/repos/huggingface/datasets/issues/840 | https://github.com/huggingface/datasets/pull/840 | 840 | Update squad_v2.py | closed | 2 | 2020-11-11T09:58:41 | 2020-11-11T15:29:34 | 2020-11-11T15:26:35 | Javier-Jimenez99 | [] | Change lines 100 and 102 to prevent overwriting ```predictions``` variable. | true |
740,355,270 | https://api.github.com/repos/huggingface/datasets/issues/839 | https://github.com/huggingface/datasets/issues/839 | 839 | XSum dataset missing spaces between sentences | open | 0 | 2020-11-11T00:34:43 | 2020-11-11T00:34:43 | null | loganlebanoff | [] | I noticed that the XSum dataset has no space between sentences. This could lead to worse results for anyone training or testing on it. Here's an example (0th entry in the test set):
`The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like ... | false |
740,328,382 | https://api.github.com/repos/huggingface/datasets/issues/838 | https://github.com/huggingface/datasets/pull/838 | 838 | CNN/Dailymail Dataset Card | closed | 0 | 2020-11-10T23:56:43 | 2020-11-25T21:09:51 | 2020-11-25T21:09:50 | mcmillanmajora | [] | Link to the card page: https://github.com/mcmillanmajora/datasets/tree/cnn_dailymail_card/datasets/cnn_dailymail
One of the questions this dataset brings up is how we want to handle versioning of the cards to mirror versions of the dataset. The different versions of this dataset are used for different tasks (which may... | true |
740,250,215 | https://api.github.com/repos/huggingface/datasets/issues/837 | https://github.com/huggingface/datasets/pull/837 | 837 | AlloCiné dataset card | closed | 0 | 2020-11-10T21:19:53 | 2020-11-25T21:56:27 | 2020-11-25T21:56:27 | mcmillanmajora | [] | Link to the card page: https://github.com/mcmillanmajora/datasets/blob/allocine_card/datasets/allocine/README.md
There wasn't as much information available for this dataset, so I'm wondering what's the best way to address open questions about the dataset. For example, where did the list of films that the dataset creat... | true |
740,187,613 | https://api.github.com/repos/huggingface/datasets/issues/836 | https://github.com/huggingface/datasets/issues/836 | 836 | load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas | closed | 8 | 2020-11-10T19:35:40 | 2021-11-24T16:59:19 | 2020-11-19T17:35:38 | randubin | [
"dataset bug"
] | Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-... | false |
740,102,210 | https://api.github.com/repos/huggingface/datasets/issues/835 | https://github.com/huggingface/datasets/issues/835 | 835 | Wikipedia postprocessing | closed | 3 | 2020-11-10T17:26:38 | 2020-11-10T18:23:20 | 2020-11-10T17:49:21 | bminixhofer | [] | Hi, thanks for this library!
Running this code:
```py
import datasets
wikipedia = datasets.load_dataset("wikipedia", "20200501.de")
print(wikipedia['train']['text'][0])
```
I get:
```
mini|Ricardo Flores Magón
mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfir...
``` | false |
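The leftover `mini|...` lines look like image-caption markup; a hedged post-processing sketch with `.map` (the `mini|` prefix is inferred from this sample, not an official fix):

```python
import datasets

wiki = datasets.load_dataset("wikipedia", "20200501.de", split="train")

def strip_media_markup(example):
    # Drop leftover image-caption lines such as "mini|Ricardo Flores Magón".
    kept = [l for l in example["text"].split("\n") if not l.startswith("mini|")]
    return {"text": "\n".join(kept)}

wiki = wiki.map(strip_media_markup)
```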
740,082,890 | https://api.github.com/repos/huggingface/datasets/issues/834 | https://github.com/huggingface/datasets/issues/834 | 834 | [GEM] add WikiLingua cross-lingual abstractive summarization dataset | closed | 2 | 2020-11-10T17:00:43 | 2021-04-15T12:04:09 | 2021-04-15T12:01:38 | yjernite | [
"dataset request"
] | ## Adding a Dataset
- **Name:** WikiLingua
- **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article.
- **Paper:** h... | false |
740,079,692 | https://api.github.com/repos/huggingface/datasets/issues/833 | https://github.com/huggingface/datasets/issues/833 | 833 | [GEM] add ASSET text simplification dataset | closed | 0 | 2020-11-10T16:56:30 | 2020-12-03T13:38:15 | 2020-12-03T13:38:15 | yjernite | [
"dataset request"
] | ## Adding a Dataset
- **Name:** ASSET
- **Description:** ASSET is a crowdsourced multi-reference corpus for assessing sentence simplification in English where each simplification was produced by executing several rewriting transformations.
- **Paper:** https://www.aclweb.org/anthology/2020.acl-main.424.pdf
- **Dat... | false |
740,077,228 | https://api.github.com/repos/huggingface/datasets/issues/832 | https://github.com/huggingface/datasets/issues/832 | 832 | [GEM] add WikiAuto text simplification dataset | closed | 0 | 2020-11-10T16:53:23 | 2020-12-03T13:38:08 | 2020-12-03T13:38:08 | yjernite | [
"dataset request"
] | ## Adding a Dataset
- **Name:** WikiAuto
- **Description:** Sentences in English Wikipedia and their corresponding sentences in Simple English Wikipedia that are written with simpler grammar and word choices. A lot of lexical and syntactic paraphrasing.
- **Paper:** https://www.aclweb.org/anthology/2020.acl-main.70... | false |
740,071,697 | https://api.github.com/repos/huggingface/datasets/issues/831 | https://github.com/huggingface/datasets/issues/831 | 831 | [GEM] Add WebNLG dataset | closed | 0 | 2020-11-10T16:46:48 | 2020-12-03T13:38:01 | 2020-12-03T13:38:01 | yjernite | [
"dataset request"
] | ## Adding a Dataset
- **Name:** WebNLG
- **Description:** WebNLG consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples (16,095 data inputs and 42,873 data-text pairs). The data is available in English and Russian
- **Paper:** https://ww... | false |
740,065,376 | https://api.github.com/repos/huggingface/datasets/issues/830 | https://github.com/huggingface/datasets/issues/830 | 830 | [GEM] add ToTTo Table-to-text dataset | closed | 1 | 2020-11-10T16:38:34 | 2020-12-10T13:06:02 | 2020-12-10T13:06:01 | yjernite | [
"dataset request"
] | ## Adding a Dataset
- **Name:** ToTTo
- **Description:** ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
- **Paper:** https://arxiv.o... | false |
740,061,699 | https://api.github.com/repos/huggingface/datasets/issues/829 | https://github.com/huggingface/datasets/issues/829 | 829 | [GEM] add Schema-Guided Dialogue | closed | 0 | 2020-11-10T16:33:44 | 2020-12-03T13:37:50 | 2020-12-03T13:37:50 | yjernite | [
"dataset request"
] | ## Adding a Dataset
- **Name:** The Schema-Guided Dialogue Dataset
- **Description:** The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 d... | false |
740,008,683 | https://api.github.com/repos/huggingface/datasets/issues/828 | https://github.com/huggingface/datasets/pull/828 | 828 | Add writer_batch_size attribute to GeneratorBasedBuilder | closed | 0 | 2020-11-10T15:28:19 | 2020-11-10T16:27:36 | 2020-11-10T16:27:36 | lhoestq | [] | As specified in #741 one would need to specify a custom ArrowWriter batch size to avoid filling the RAM. Indeed the defaults buffer size is 10 000 examples but for multimodal datasets that contain images or videos we may want to reduce that. | true |
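A sketch of how a custom builder might use the new attribute, assuming the post-PR API; the builder class and its features are hypothetical:

```python
import datasets

class MyMultimodalDataset(datasets.GeneratorBasedBuilder):
    # Flush to disk every 100 examples instead of the default 10 000, keeping
    # RAM bounded when single examples are large (images, videos, ...).
    DEFAULT_WRITER_BATCH_SIZE = 100

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"payload": datasets.Value("binary")})
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN)]

    def _generate_examples(self):
        for i in range(1_000):
            yield i, {"payload": b"\x00" * 1024}
```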
739,983,024 | https://api.github.com/repos/huggingface/datasets/issues/827 | https://github.com/huggingface/datasets/issues/827 | 827 | [GEM] MultiWOZ dialogue dataset | closed | 2 | 2020-11-10T14:57:50 | 2022-10-05T12:31:13 | 2022-10-05T12:31:13 | yjernite | [
"dataset request"
] | ## Adding a Dataset
- **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz)
- **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – there are no annotations from the user... | false |
739,976,716 | https://api.github.com/repos/huggingface/datasets/issues/826 | https://github.com/huggingface/datasets/issues/826 | 826 | [GEM] Add E2E dataset | closed | 0 | 2020-11-10T14:50:40 | 2020-12-03T13:37:57 | 2020-12-03T13:37:57 | yjernite | [
"dataset request"
] | ## Adding a Dataset
- **Name:** E2E NLG dataset (for End-to-end natural language generation)
- **Description:**a dataset for training end-to-end, datadriven natural language generation systems in the restaurant domain, the datasets consists of 5,751 dialogue-act Meaning Representations (structured data) and 8.1 refer... | false |
739,925,960 | https://api.github.com/repos/huggingface/datasets/issues/825 | https://github.com/huggingface/datasets/pull/825 | 825 | Add accuracy, precision, recall and F1 metrics | closed | 0 | 2020-11-10T13:50:35 | 2020-11-11T19:23:48 | 2020-11-11T19:23:43 | jplu | [] | This PR adds several single metrics, namely:
- Accuracy
- Precision
- Recall
- F1
They all use the sklearn metrics of the same name under the hood. They allow different useful features when training a multilabel/multiclass model:
- have a macro/micro/per label/weighted/binary/per sample score
- score only t... | true |
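Usage would then look roughly like this — a sketch assuming the `average` kwarg is forwarded to `sklearn.metrics.f1_score` as described:

```python
from datasets import load_metric

f1 = load_metric("f1")
# "average" is forwarded to sklearn; "macro" averages the per-label scores
# without weighting by support.
score = f1.compute(predictions=[0, 1, 1, 2], references=[0, 1, 2, 2], average="macro")
print(score)  # {'f1': ...}
```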
739,896,526 | https://api.github.com/repos/huggingface/datasets/issues/824 | https://github.com/huggingface/datasets/issues/824 | 824 | Discussion using datasets in offline mode | closed | 11 | 2020-11-10T13:10:51 | 2023-10-26T09:26:26 | 2022-02-15T10:32:36 | mandubian | [
"enhancement",
"generic discussion"
] | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some point... | false |
739,815,763 | https://api.github.com/repos/huggingface/datasets/issues/823 | https://github.com/huggingface/datasets/issues/823 | 823 | how processing in batch works in datasets | closed | 3 | 2020-11-10T11:11:17 | 2020-11-10T13:11:10 | 2020-11-10T13:11:09 | rabeehkarimimahabadi | [
"dataset request"
] | Hi,
I need to process my datasets in batches before they are passed to the dataloader;
here is my code:
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
tokenizer: Callable = NotImplemented
... | false |
739,579,314 | https://api.github.com/repos/huggingface/datasets/issues/822 | https://github.com/huggingface/datasets/issues/822 | 822 | datasets freezes | closed | 2 | 2020-11-10T05:10:19 | 2023-07-20T16:08:14 | 2023-07-20T16:08:13 | rabeehkarimimahabadi | [
"dataset bug"
] | Hi, I want to load these two datasets and convert them to Dataset format in torch and the code freezes for me, could you have a look please? thanks
dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])
dataset2 = load_datase... | false |
739,506,859 | https://api.github.com/repos/huggingface/datasets/issues/821 | https://github.com/huggingface/datasets/issues/821 | 821 | `kor_nli` dataset isn't being loaded properly | closed | 0 | 2020-11-10T02:04:12 | 2020-11-16T13:59:12 | 2020-11-16T13:59:12 | sackoh | [] | There are two issues with the `kor_nli` dataset
1. csv.DictReader failed to split features by tab
   - The label feature should not contain `None` values, but it does.
```python
kor_nli_train['train'].unique('gold_label')
# ['neutral', 'entailment', 'contradiction', None]
```
-... | false |
739,387,617 | https://api.github.com/repos/huggingface/datasets/issues/820 | https://github.com/huggingface/datasets/pull/820 | 820 | Update quail dataset to v1.3 | closed | 0 | 2020-11-09T21:49:26 | 2020-11-10T09:06:35 | 2020-11-10T09:06:35 | ngdodd | [] | Updated quail to most recent version, to address the problem originally discussed [here](https://github.com/huggingface/datasets/issues/806). | true |
739,250,624 | https://api.github.com/repos/huggingface/datasets/issues/819 | https://github.com/huggingface/datasets/pull/819 | 819 | Make save function use deterministic global vars order | closed | 2 | 2020-11-09T18:12:03 | 2021-11-30T13:34:09 | 2020-11-11T15:20:51 | lhoestq | [] | The `dumps` function needs to be deterministic for the caching mechanism.
However in #816 I noticed that one of dill's methods for recursively checking the globals of a function may return the globals in a different order each time it's used. To fix that I sort the globals by key in the `globs` dictionary.
I had to add a re... | true |
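The core idea of the fix, as a minimal standalone sketch (the library wires this into its own pickler):

```python
import dill

def deterministic_globalvars(func):
    """Return the globals a function depends on, in a deterministic key order."""
    globs = dill.detect.globalvars(func)
    # globalvars() can return its keys in a different order on each call, which
    # would change the pickled bytes and break cache fingerprints; sorting fixes it.
    return {name: globs[name] for name in sorted(globs)}
```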
739,173,861 | https://api.github.com/repos/huggingface/datasets/issues/818 | https://github.com/huggingface/datasets/pull/818 | 818 | Fix type hints pickling in python 3.6 | closed | 0 | 2020-11-09T16:27:47 | 2020-11-10T09:07:03 | 2020-11-10T09:07:02 | lhoestq | [] | Type hints can't be properly pickled in python 3.6. This was causing errors the `run_mlm.py` script from `transformers` with python 3.6
However Cloupickle proposed a [fix](https://github.com/cloudpipe/cloudpickle/pull/318/files) to make it work anyway.
The idea is just to implement the pickling/unpickling of parame... | true |
739,145,369 | https://api.github.com/repos/huggingface/datasets/issues/817 | https://github.com/huggingface/datasets/issues/817 | 817 | Add MRQA dataset | closed | 1 | 2020-11-09T15:52:19 | 2020-12-04T15:44:42 | 2020-12-04T15:44:41 | VictorSanh | [
"dataset request"
] | ## Adding a Dataset
- **Name:** MRQA
- **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. Th... | false |
739,102,686 | https://api.github.com/repos/huggingface/datasets/issues/816 | https://github.com/huggingface/datasets/issues/816 | 816 | [Caching] Dill globalvars() output order is not deterministic and can cause cache issues. | closed | 1 | 2020-11-09T15:01:20 | 2020-11-11T15:20:50 | 2020-11-11T15:20:50 | lhoestq | [] | Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not deterministic and can cause caching issues.
To fix that one could register an implementati... | false |
738,842,092 | https://api.github.com/repos/huggingface/datasets/issues/815 | https://github.com/huggingface/datasets/issues/815 | 815 | Is dataset iterative or not? | closed | 8 | 2020-11-09T09:11:48 | 2020-11-10T10:50:03 | 2020-11-10T10:50:03 | rabeehkarimimahabadi | [
"dataset request"
] | Hi
I want to use your library for large-scale training; I am not sure whether this is implemented as iterative datasets or not.
Could you provide me with an example of how I can use datasets as iterative datasets?
thanks | false |
738,500,443 | https://api.github.com/repos/huggingface/datasets/issues/814 | https://github.com/huggingface/datasets/issues/814 | 814 | Joining multiple datasets | closed | 1 | 2020-11-08T16:19:30 | 2020-11-08T19:38:48 | 2020-11-08T19:38:48 | rabeehkarimimahabadi | [
"dataset request"
] | Hi
I have multiple iterative datasets from your library with different sizes, and I want to join them in a way that each dataset is sampled equally, so smaller datasets more, larger ones less. Could you tell me how to implement this in pytorch? thanks | false |
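One hedged answer uses `interleave_datasets`, which was added to the library after this issue, so treat its availability here as an assumption:

```python
from datasets import load_dataset, interleave_datasets

small = load_dataset("squad", split="train[:1000]")
large = load_dataset("squad", split="train[:10000]")
# Drawing from each source with equal probability oversamples the smaller one
# relative to its size, which matches the balancing asked for above.
mixed = interleave_datasets([small, large], probabilities=[0.5, 0.5], seed=42)
```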
738,489,852 | https://api.github.com/repos/huggingface/datasets/issues/813 | https://github.com/huggingface/datasets/issues/813 | 813 | How to implement DistributedSampler with datasets | closed | 4 | 2020-11-08T15:27:11 | 2022-10-05T12:54:23 | 2022-10-05T12:54:23 | rabeehkarimimahabadi | [
"dataset request"
] | Hi,
I am using your datasets to define my dataloaders, and I am training finetune_trainer.py in huggingface repo on them.
I need a DistributedSampler to be able to train the models on TPUs and to distribute the load across the TPU cores. Could you tell me how I can implement the distributed sampler when using d... | false |
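Since a `datasets.Dataset` supports `len()` and integer indexing, PyTorch's stock `DistributedSampler` can wrap it directly — a sketch with placeholder rank values:

```python
from datasets import load_dataset
from torch.utils.data import DataLoader, DistributedSampler

ds = load_dataset("squad", split="train[:100]")
# num_replicas/rank would come from the distributed launcher (e.g. one per TPU
# core); fixed placeholder values here so the sketch runs without a process group.
sampler = DistributedSampler(ds, num_replicas=8, rank=0, shuffle=True)
# Identity collate keeps ragged fields (like squad's "answers") from breaking
# the default collation.
loader = DataLoader(ds, sampler=sampler, batch_size=4, collate_fn=lambda b: b)
```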
738,340,217 | https://api.github.com/repos/huggingface/datasets/issues/812 | https://github.com/huggingface/datasets/issues/812 | 812 | Too much logging | closed | 7 | 2020-11-07T23:56:30 | 2021-01-26T14:31:34 | 2020-11-16T17:06:42 | dspoka | [] | I'm doing this in the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1... | false |
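A likely explanation is that the `Lock ... acquired` lines come from the third-party `filelock` logger, which the `datasets` verbosity setting does not cover — a hedged sketch:

```python
import logging
from datasets.utils import logging as datasets_logging

datasets_logging.set_verbosity_warning()
# The lock-acquisition lines are emitted by the "filelock" library's own logger,
# so its level has to be raised separately.
logging.getLogger("filelock").setLevel(logging.WARNING)
```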
738,280,132 | https://api.github.com/repos/huggingface/datasets/issues/811 | https://github.com/huggingface/datasets/issues/811 | 811 | nlp viewer error | closed | 3 | 2020-11-07T17:08:58 | 2022-02-15T10:51:44 | 2022-02-14T15:24:20 | jc-hou | [
"nlp-viewer"
] | Hello,
when I select amazon_us_reviews in nlp viewer, it shows error.
https://huggingface.co/nlp/viewer/?dataset=amazon_us_reviews

| false |
737,878,370 | https://api.github.com/repos/huggingface/datasets/issues/810 | https://github.com/huggingface/datasets/pull/810 | 810 | Fix seqeval metric | closed | 0 | 2020-11-06T16:11:43 | 2020-11-09T14:04:29 | 2020-11-09T14:04:28 | sgugger | [] | The current seqeval metric returns the following error when computed:
```
~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/78a944d83252b5a16c9a2e49f057f4c6e02f18cc03349257025a8c9aea6524d8/seqeval.py in _compute(self, predictions, references, suffix)
102 scores = {}
103 for type_... | true |
737,832,701 | https://api.github.com/repos/huggingface/datasets/issues/809 | https://github.com/huggingface/datasets/issues/809 | 809 | Add Google Taskmaster dataset | closed | 2 | 2020-11-06T15:10:41 | 2021-04-20T13:09:26 | 2021-04-20T13:09:26 | yjernite | [
"dataset request"
] | ## Adding a Dataset
- **Name:** Taskmaster
- **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations)
- **Paper:** https://arxiv.org/abs/1909.05358
- **Data:** https://github.com/google-research-datasets/Taskmaster
- **Motivation... | false |
737,638,942 | https://api.github.com/repos/huggingface/datasets/issues/808 | https://github.com/huggingface/datasets/pull/808 | 808 | dataset(dgs): initial dataset loading script | closed | 2 | 2020-11-06T10:14:43 | 2021-03-23T06:18:55 | 2021-03-23T06:18:55 | AmitMY | [] | When trying to create dummy data I get:
> Dataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. ... | true |
737,509,954 | https://api.github.com/repos/huggingface/datasets/issues/807 | https://github.com/huggingface/datasets/issues/807 | 807 | load_dataset for LOCAL CSV files report CONNECTION ERROR | closed | 11 | 2020-11-06T06:33:04 | 2021-01-11T01:30:27 | 2020-11-14T05:30:34 | shexuan | [] | ## load_dataset for LOCAL CSV files report CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | false |
737,215,430 | https://api.github.com/repos/huggingface/datasets/issues/806 | https://github.com/huggingface/datasets/issues/806 | 806 | Quail dataset urls are out of date | closed | 3 | 2020-11-05T19:40:19 | 2020-11-10T14:02:51 | 2020-11-10T14:02:51 | ngdodd | [] | ### Code
```
from datasets import load_dataset
quail = load_dataset('quail')
```
### Error
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml
```
As per [quail v1.3 commit](https://github.co... | false |
737,019,360 | https://api.github.com/repos/huggingface/datasets/issues/805 | https://github.com/huggingface/datasets/issues/805 | 805 | On loading a metric from datasets, I get the following error | closed | 1 | 2020-11-05T15:14:38 | 2022-02-14T15:32:59 | 2022-02-14T15:32:59 | laibamehnaz | [] | `from datasets import load_metric`
`metric = load_metric('bleurt')`
Traceback:
210 class _ArrayXDExtensionType(pa.PyExtensionType):
211
212 ndims: int = None
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
Any help will be appreciated. Thank you. | false |
736,858,507 | https://api.github.com/repos/huggingface/datasets/issues/804 | https://github.com/huggingface/datasets/issues/804 | 804 | Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa') | closed | 3 | 2020-11-05T11:38:01 | 2020-11-09T14:14:59 | 2020-11-09T14:14:58 | PaulLerner | [] | # The issue
It's all in the title, it appears to be fine on the train and validation sets.
Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ?
# How to reproduce
```py
from datasets import load_dataset
kilt_tas... | false |
736,818,917 | https://api.github.com/repos/huggingface/datasets/issues/803 | https://github.com/huggingface/datasets/pull/803 | 803 | fix: typos in tutorial to map KILT and TriviaQA | closed | 0 | 2020-11-05T10:42:00 | 2020-11-10T09:08:07 | 2020-11-10T09:08:07 | PaulLerner | [] | true | |
736,296,343 | https://api.github.com/repos/huggingface/datasets/issues/802 | https://github.com/huggingface/datasets/pull/802 | 802 | Add XGlue | closed | 1 | 2020-11-04T17:29:54 | 2022-04-28T08:15:36 | 2020-12-01T15:58:27 | patrickvonplaten | [] | Dataset is ready to merge. An important feature of this dataset is that for each config the train data is in English, while dev and test data are in multiple languages. Therefore, @lhoestq and I decided offline that we will give the dataset the following API, *e.g.* for
```python
load_dataset("xglue", "ner") # wo... | true |
735,790,876 | https://api.github.com/repos/huggingface/datasets/issues/801 | https://github.com/huggingface/datasets/issues/801 | 801 | How to join two datasets? | closed | 3 | 2020-11-04T03:53:11 | 2020-12-23T14:02:58 | 2020-12-23T14:02:58 | shangw-nvidia | [] | Hi,
I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels?
I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is... | false |
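There was no built-in column-wise join at the time; one workaround sketch pulls rows from the second dataset by index inside `.map` (the `pair` column here is hypothetical):

```python
from datasets import load_dataset

ds_a = load_dataset("squad", split="train[:100]")
# ds_b stands in for a second preprocessed dataset with the same number of rows.
ds_b = ds_a.map(lambda ex: {"pair": ex["question"].upper()},
                remove_columns=ds_a.column_names)

# Pull row i of ds_b into row i of ds_a; random access on the second dataset
# makes this a simple, if not especially fast, column-wise join.
joined = ds_a.map(lambda ex, i: {"pair": ds_b[i]["pair"]}, with_indices=True)
```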
735,772,775 | https://api.github.com/repos/huggingface/datasets/issues/800 | https://github.com/huggingface/datasets/pull/800 | 800 | Update loading_metrics.rst | closed | 0 | 2020-11-04T02:57:11 | 2020-11-11T15:28:32 | 2020-11-11T15:28:32 | ayushidalmia | [] | Minor bug | true |
735,551,165 | https://api.github.com/repos/huggingface/datasets/issues/799 | https://github.com/huggingface/datasets/pull/799 | 799 | switch amazon reviews class label order | closed | 0 | 2020-11-03T18:38:58 | 2020-11-03T18:44:14 | 2020-11-03T18:44:10 | joeddav | [] | Switches the label order to be more intuitive for amazon reviews, #791. | true |
735,518,805 | https://api.github.com/repos/huggingface/datasets/issues/798 | https://github.com/huggingface/datasets/issues/798 | 798 | Cannot load TREC dataset: ConnectionError | closed | 9 | 2020-11-03T17:45:22 | 2022-02-14T15:34:22 | 2022-02-14T15:34:22 | kaletap | [
"dataset bug"
] | ## Problem
I cannot load the "trec" dataset; it results in a ConnectionError as shown below. I've tried on both Google Colab and locally.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label')` returns <Response [302]>.
* `requests.head('http://cogcomp.org/Data/QA/QC/train_5500.label', allow_redirects=True... | false |
735,420,332 | https://api.github.com/repos/huggingface/datasets/issues/797 | https://github.com/huggingface/datasets/issues/797 | 797 | Token classification labels are strings and we don't have the list of labels | closed | 4 | 2020-11-03T15:33:30 | 2022-02-14T15:41:54 | 2022-02-14T15:41:53 | sgugger | [
"enhancement",
"Dataset discussion"
] | Not sure if this is an issue we want to fix or not, putting it here so it's not forgotten. Right now, in token classification datasets, the labels for NER, POS and the likes are typed as `Sequence` of `strings`, which is wrong in my opinion. These should be `Sequence` of `ClassLabel` or some types that gives easy acces... | false |
735,198,265 | https://api.github.com/repos/huggingface/datasets/issues/795 | https://github.com/huggingface/datasets/issues/795 | 795 | Descriptions of raw and processed versions of wikitext are inverted | closed | 2 | 2020-11-03T10:24:51 | 2022-02-14T15:46:21 | 2022-02-14T15:46:21 | fraboniface | [
"dataset bug"
] | Nothing of importance, but it looks like the descriptions of wikitext-n-v1 and wikitext-n-raw-v1 are inverted for both n=2 and n=103. I just verified by loading them and the `<unk>` tokens are present in the non-raw versions, which confirms that it's a mere inversion of the descriptions and not of the datasets themselv... | false |
735,158,725 | https://api.github.com/repos/huggingface/datasets/issues/794 | https://github.com/huggingface/datasets/issues/794 | 794 | self.options cannot be converted to a Python object for pickling | closed | 1 | 2020-11-03T09:27:34 | 2020-11-19T17:35:38 | 2020-11-19T17:35:38 | hzqjyyx | [
"bug"
] | Hi,
Currently I am trying to load csv file with customized read_options. And the latest master seems broken if we pass the ReadOptions object.
Here is a code snippet
```python
from datasets import load_dataset
from pyarrow.csv import ReadOptions
load_dataset("csv", data_files=["out.csv"], read_options=ReadOpt... | false |
735,105,907 | https://api.github.com/repos/huggingface/datasets/issues/793 | https://github.com/huggingface/datasets/pull/793 | 793 | [Datasets] fix discofuse links | closed | 0 | 2020-11-03T08:03:45 | 2020-11-03T08:16:41 | 2020-11-03T08:16:40 | patrickvonplaten | [] | The discofuse links were changed: https://github.com/google-research-datasets/discofuse/commit/d27641016eb5b3eb2af03c7415cfbb2cbebe8558.
The old links are broken
I changed the links and created the new dataset_infos.json.
Pinging @thomwolf @lhoestq for notification. | true |
734,693,652 | https://api.github.com/repos/huggingface/datasets/issues/792 | https://github.com/huggingface/datasets/issues/792 | 792 | KILT dataset: empty string in triviaqa input field | closed | 1 | 2020-11-02T17:33:54 | 2020-11-05T10:34:59 | 2020-11-05T10:34:59 | PaulLerner | [] | # What happened
Both train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty string in their input field (unlike the natural questions dataset, part of the same benchmark)
# Versions
KILT version is `1.0.0`
`datasets` version is `1.1.2`
[more here](https://gist.github.com/Pa... | false |
734,656,518 | https://api.github.com/repos/huggingface/datasets/issues/791 | https://github.com/huggingface/datasets/pull/791 | 791 | add amazon reviews | closed | 3 | 2020-11-02T16:42:57 | 2020-11-03T20:15:06 | 2020-11-03T16:43:57 | joeddav | [] | Adds the Amazon US Reviews dataset as requested in #353. Converted from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/amazon_us_reviews). cc @clmnt @sshleifer | true |
734,470,197 | https://api.github.com/repos/huggingface/datasets/issues/790 | https://github.com/huggingface/datasets/issues/790 | 790 | Error running pip install -e ".[dev]" on MacOS 10.13.6: faiss/python does not exist | closed | 2 | 2020-11-02T12:36:35 | 2020-11-10T14:05:02 | 2020-11-10T14:05:02 | shawwn | [] | I was following along with https://huggingface.co/docs/datasets/share_dataset.html#adding-tests-and-metadata-to-the-dataset when I ran into this error.
```sh
git clone https://github.com/huggingface/datasets
cd datasets
virtualenv venv -p python3 --system-site-packages
source venv/bin/activate
pip install -e ".... | false |
734,237,839 | https://api.github.com/repos/huggingface/datasets/issues/789 | https://github.com/huggingface/datasets/pull/789 | 789 | dataset(ncslgr): add initial loading script | closed | 4 | 2020-11-02T06:50:10 | 2020-12-01T13:41:37 | 2020-12-01T13:41:36 | AmitMY | [] | Its a small dataset, but its heavily annotated
https://www.bu.edu/asllrp/ncslgr.html

| true |
734,136,124 | https://api.github.com/repos/huggingface/datasets/issues/788 | https://github.com/huggingface/datasets/issues/788 | 788 | failed to reuse cache | closed | 0 | 2020-11-02T02:42:36 | 2020-11-02T12:26:15 | 2020-11-02T12:26:15 | WangHexie | [] | I wrapped `load_dataset` in a class method and cached the data in a directory. But when I import the class and use the method, the data still has to be downloaded again. The information (Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown si... | false |
734,070,162 | https://api.github.com/repos/huggingface/datasets/issues/787 | https://github.com/huggingface/datasets/pull/787 | 787 | Adding nli_tr dataset | closed | 1 | 2020-11-01T21:49:44 | 2020-11-12T19:06:02 | 2020-11-12T19:06:02 | e-budur | [] | Hello,
In this pull request, we have implemented the necessary interface to add our recent dataset [NLI-TR](https://github.com/boun-tabi/NLI-TR). The datasets will be presented on a full paper at EMNLP 2020 this month. [[arXiv link] ](https://arxiv.org/pdf/2004.14963.pdf)
The dataset is the neural machine transl... | true |
733,761,717 | https://api.github.com/repos/huggingface/datasets/issues/786 | https://github.com/huggingface/datasets/issues/786 | 786 | feat(dataset): multiprocessing _generate_examples | closed | 2 | 2020-10-31T16:52:16 | 2023-01-16T10:59:13 | 2023-01-16T10:59:13 | AmitMY | [] | forking this out of #741, this issue is only regarding multiprocessing
I'd love it if there were a dataset configuration parameter `workers`, where when it is `1` it behaves as it does right now, and when it's `>1` maybe `_generate_examples` can also get the `pool` and return an iterable using the pool.
In my use case... | false |
733,719,419 | https://api.github.com/repos/huggingface/datasets/issues/785 | https://github.com/huggingface/datasets/pull/785 | 785 | feat(aslg_pc12): add dev and test data splits | closed | 2 | 2020-10-31T13:25:38 | 2020-11-10T15:29:30 | 2020-11-10T15:29:30 | AmitMY | [] | For reproducibility's sake, it's best if there are defined dev and test splits.
The original paper author did not define splits, neither for the entire dataset nor for the sample loaded via this library, so I decided to define:
- 5/7th for train
- 1/7th for dev
- 1/7th for test
| true |
733,700,463 | https://api.github.com/repos/huggingface/datasets/issues/784 | https://github.com/huggingface/datasets/issues/784 | 784 | Issue with downloading Wikipedia data for low resource language | closed | 5 | 2020-10-31T11:40:00 | 2022-02-09T17:50:16 | 2020-11-25T15:42:13 | SamuelCahyawijaya | [] | Hi, I tried to download Sundanese and Javanese wikipedia data with the following snippet
```
jv_wiki = datasets.load_dataset('wikipedia', '20200501.jv', beam_runner='DirectRunner')
su_wiki = datasets.load_dataset('wikipedia', '20200501.su', beam_runner='DirectRunner')
```
And I get the following error for these tw... | false |
733,536,254 | https://api.github.com/repos/huggingface/datasets/issues/783 | https://github.com/huggingface/datasets/pull/783 | 783 | updated links to v1.3 of quail, fixed the description | closed | 1 | 2020-10-30T21:47:33 | 2020-11-29T23:05:19 | 2020-11-29T23:05:18 | annargrs | [] | updated links to v1.3 of quail, fixed the description | true |
733,316,463 | https://api.github.com/repos/huggingface/datasets/issues/782 | https://github.com/huggingface/datasets/pull/782 | 782 | Fix metric deletion when attributes are missing | closed | 0 | 2020-10-30T16:16:10 | 2020-10-30T16:47:53 | 2020-10-30T16:47:52 | lhoestq | [] | When you call `del` on a metric we want to make sure that the arrow attributes are not already deleted.
I just added `if hasattr(...)` to make sure it doesn't crash | true |
733,168,609 | https://api.github.com/repos/huggingface/datasets/issues/781 | https://github.com/huggingface/datasets/pull/781 | 781 | Add XNLI train set | closed | 5 | 2020-10-30T13:21:53 | 2022-06-09T23:26:46 | 2020-11-09T18:22:49 | lhoestq | [] | I added the train set that was built using the translated MNLI.
Now you can load the dataset specifying one language:
```python
from datasets import load_dataset
xnli_en = load_dataset("xnli", "en")
print(xnli_en["train"][0])
# {'hypothesis': 'Product and geography are what make cream skimming work .', 'label':... | true |
732,738,647 | https://api.github.com/repos/huggingface/datasets/issues/780 | https://github.com/huggingface/datasets/pull/780 | 780 | Add ASNQ dataset | closed | 4 | 2020-10-29T23:31:56 | 2020-11-10T09:26:23 | 2020-11-10T09:26:23 | mkserge | [] | This pull request adds the ASNQ dataset. It is a dataset for answer sentence selection derived from Google Natural Questions (NQ) dataset (Kwiatkowski et al. 2019). The dataset details can be found in the paper at https://arxiv.org/abs/1911.04118
The dataset is authored by Siddhant Garg, Thuy Vu and Alessandro Mosch... | true |
732,514,887 | https://api.github.com/repos/huggingface/datasets/issues/779 | https://github.com/huggingface/datasets/pull/779 | 779 | Feature/fidelity metrics from emnlp2020 evaluating and characterizing human rationales | closed | 5 | 2020-10-29T17:31:14 | 2023-07-11T09:36:30 | 2023-07-11T09:36:30 | rathoreanirudh | [
"transfer-to-evaluate"
] | This metric computes fidelity (Yu et al. 2019, DeYoung et al. 2019) and normalized fidelity (Carton et al. 2020). | true |
732,449,652 | https://api.github.com/repos/huggingface/datasets/issues/778 | https://github.com/huggingface/datasets/issues/778 | 778 | Unexpected behavior when loading cached csv file? | closed | 2 | 2020-10-29T16:06:10 | 2020-10-29T21:21:27 | 2020-10-29T21:21:27 | dcfidalgo | [] | I read a csv file from disk and forgot to specify the right delimiter. When I read the csv file again specifying the right delimiter it had no effect since it was using the cached dataset. I am not sure if this is unwanted behavior since I can always specify `download_mode="force_redownload"`. But I think it would be n... | false |
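The workaround the author mentions looks like this in practice — a sketch with a placeholder file name:

```python
from datasets import load_dataset

# Re-running with only the corrected delimiter silently reuses the first cache;
# forcing a re-read makes the new parameter take effect.
ds = load_dataset("csv", data_files="data.tsv", delimiter="\t",
                  download_mode="force_redownload")
```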
732,376,648 | https://api.github.com/repos/huggingface/datasets/issues/777 | https://github.com/huggingface/datasets/pull/777 | 777 | Better error message for uninitialized metric | closed | 0 | 2020-10-29T14:42:50 | 2020-10-29T15:18:26 | 2020-10-29T15:18:24 | lhoestq | [] | When calling `metric.compute()` without having called `metric.add` or `metric.add_batch` at least once, the error was quite cryptic. I added a better error message
Fix #729 | true |
732,343,550 | https://api.github.com/repos/huggingface/datasets/issues/776 | https://github.com/huggingface/datasets/pull/776 | 776 | Allow custom split names in text dataset | closed | 1 | 2020-10-29T14:04:06 | 2020-10-30T13:46:45 | 2020-10-30T13:23:52 | lhoestq | [] | The `text` dataset used to return only splits like train, test and validation. Other splits were ignored.
Now any split name is allowed.
I did the same for `json`, `pandas` and `csv`
Fix #735 | true |
732,287,504 | https://api.github.com/repos/huggingface/datasets/issues/775 | https://github.com/huggingface/datasets/pull/775 | 775 | Properly delete metrics when a process is killed | closed | 0 | 2020-10-29T12:52:07 | 2020-10-29T14:01:20 | 2020-10-29T14:01:19 | lhoestq | [] | Tests are flaky when using metrics in distributed setup.
There is because of one test that make sure that using two possibly incompatible metric computation (same exp id) either works or raises the right error.
However if the error is raised, all the processes of the metric are killed, and the open files (arrow + loc... | true |
732,265,741 | https://api.github.com/repos/huggingface/datasets/issues/774 | https://github.com/huggingface/datasets/pull/774 | 774 | [ROUGE] Add description to Rouge metric | closed | 0 | 2020-10-29T12:19:32 | 2020-10-29T17:55:50 | 2020-10-29T17:55:48 | patrickvonplaten | [] | Add information about case sensitivity to ROUGE. | true |
731,684,153 | https://api.github.com/repos/huggingface/datasets/issues/773 | https://github.com/huggingface/datasets/issues/773 | 773 | Adding CC-100: Monolingual Datasets from Web Crawl Data | closed | 4 | 2020-10-28T18:20:41 | 2022-01-26T13:22:54 | 2020-12-14T10:20:07 | yjernite | [
"dataset request"
] | ## Adding a Dataset
- **Name:** CC-100: Monolingual Datasets from Web Crawl Data
- **Description:** https://twitter.com/alex_conneau/status/1321507120848625665
- **Paper:** https://arxiv.org/abs/1911.02116
- **Data:** http://data.statmt.org/cc-100/
- **Motivation:** A large scale multi-lingual language modeling da... | false |
731,612,430 | https://api.github.com/repos/huggingface/datasets/issues/772 | https://github.com/huggingface/datasets/pull/772 | 772 | Fix metric with cache dir | closed | 0 | 2020-10-28T16:43:13 | 2020-10-29T09:34:44 | 2020-10-29T09:34:43 | lhoestq | [] | The cache_dir provided by the user was concatenated twice, causing FileNotFound errors.
The tests didn't cover the case of providing `cache_dir=` for metrics because of a stupid issue (it was not using the right parameter).
I removed the double concatenation and fixed the tests.
Fix #728 | true |
731,482,213 | https://api.github.com/repos/huggingface/datasets/issues/771 | https://github.com/huggingface/datasets/issues/771 | 771 | Using `Dataset.map` with `n_proc>1` print multiple progress bars | closed | 3 | 2020-10-28T14:13:27 | 2023-02-13T20:16:39 | 2023-02-13T20:16:39 | sgugger | [] | When using `Dataset.map` with `n_proc > 1`, only one of the processes should print a progress bar (to make the output readable). Right now, `n_proc` progress bars are printed. | false |
731,445,222 | https://api.github.com/repos/huggingface/datasets/issues/770 | https://github.com/huggingface/datasets/pull/770 | 770 | Fix custom builder caching | closed | 0 | 2020-10-28T13:32:24 | 2020-10-29T09:36:03 | 2020-10-29T09:36:01 | lhoestq | [] | The cache directory of a dataset didn't take into account additional parameters that the user could specify such as `features` or any parameter of the builder configuration kwargs (ex: `encoding` for the `text` dataset).
To fix that, the cache directory name now has a suffix that depends on all of them.
Fix #730
... | true |
731,257,104 | https://api.github.com/repos/huggingface/datasets/issues/769 | https://github.com/huggingface/datasets/issues/769 | 769 | How to choose proper download_mode in function load_dataset? | closed | 5 | 2020-10-28T09:16:19 | 2022-02-22T12:22:52 | 2022-02-22T12:22:52 | jzq2000 | [] | Hi, I am a beginner with datasets and I am trying to use datasets to load my csv file.
My csv file looks like this:
```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4
"Emerges as something rare , an issue movie that 's so hones... | false |
730,908,060 | https://api.github.com/repos/huggingface/datasets/issues/768 | https://github.com/huggingface/datasets/issues/768 | 768 | Add a `lazy_map` method to `Dataset` and `DatasetDict` | open | 1 | 2020-10-27T22:33:03 | 2020-10-28T08:58:13 | null | sgugger | [
"enhancement"
The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function to a given item, but only when the item is requested. Two use cases:
1. load image on the fly
2. apply a random function and get different outputs at each epoch (like dat... | false |
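These use cases were later served by `Dataset.set_transform`, which applies a function at access time rather than eagerly — a sketch assuming a library version that ships it:

```python
import random
from datasets import load_dataset

ds = load_dataset("squad", split="train[:100]")

def jitter(batch):
    # Runs at __getitem__ time, so each epoch can see different outputs (use case 2).
    return {"q": [q + "!" * random.randint(0, 3) for q in batch["question"]]}

ds.set_transform(jitter)
print(ds[0]["q"])
```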
730,771,610 | https://api.github.com/repos/huggingface/datasets/issues/767 | https://github.com/huggingface/datasets/issues/767 | 767 | Add option for named splits when using ds.train_test_split | open | 1 | 2020-10-27T19:59:44 | 2020-11-10T14:05:21 | null | nateraw | [
"enhancement"
] | ### Feature Request 🚀
Can we add a way to name your splits when using the `.train_test_split` function?
In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, it's kinda useless to get a `test` split back from `tra... | false |
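The manual renaming this request would remove, as a sketch:

```python
from datasets import load_dataset, DatasetDict

ds = load_dataset("imdb")
# train_test_split always names its outputs "train" and "test", so carving out
# a validation set currently means renaming by hand:
split = ds["train"].train_test_split(test_size=0.1, seed=42)
ds = DatasetDict(train=split["train"], validation=split["test"], test=ds["test"])
```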
730,669,596 | https://api.github.com/repos/huggingface/datasets/issues/766 | https://github.com/huggingface/datasets/issues/766 | 766 | [GEM] add DART data-to-text generation dataset | closed | 2 | 2020-10-27T17:34:04 | 2020-12-03T13:37:18 | 2020-12-03T13:37:18 | yjernite | [
"dataset request"
] | ## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **P... | false |
730,668,332 | https://api.github.com/repos/huggingface/datasets/issues/765 | https://github.com/huggingface/datasets/issues/765 | 765 | [GEM] Add DART data-to-text generation dataset | closed | 0 | 2020-10-27T17:32:23 | 2020-10-27T17:34:21 | 2020-10-27T17:34:21 | yjernite | [
"dataset request"
] | ## Adding a Dataset
- **Name:** DART
- **Description:** DART consists of 82,191 examples across different domains with each input being a semantic RDF triple set derived from data records in tables and the tree ontology of the schema, annotated with sentence descriptions that cover all facts in the triple set.
- **P... | false |
730,617,828 | https://api.github.com/repos/huggingface/datasets/issues/764 | https://github.com/huggingface/datasets/pull/764 | 764 | Adding Issue Template for Dataset Requests | closed | 0 | 2020-10-27T16:37:08 | 2020-10-27T17:25:26 | 2020-10-27T17:25:25 | yjernite | [] | adding .github/ISSUE_TEMPLATE/add-dataset.md | true |
730,593,631 | https://api.github.com/repos/huggingface/datasets/issues/763 | https://github.com/huggingface/datasets/pull/763 | 763 | Fixed errors in bertscore related to custom baseline | closed | 0 | 2020-10-27T16:08:35 | 2020-10-28T17:59:25 | 2020-10-28T17:59:25 | juanjucm | [] | [bertscore version 0.3.6 ](https://github.com/Tiiiger/bert_score) added support for custom baseline files. This update added extra argument `baseline_path` to BERTScorer class as well as some extra boolean parameters `use_custom_baseline` in functions like `get_hash(model, num_layers, idf, rescale_with_baseline, use_cu... | true |
730,586,972 | https://api.github.com/repos/huggingface/datasets/issues/762 | https://github.com/huggingface/datasets/issues/762 | 762 | [GEM] Add Czech Restaurant data-to-text generation dataset | closed | 0 | 2020-10-27T16:00:47 | 2020-12-03T13:37:44 | 2020-12-03T13:37:44 | yjernite | [
"dataset request"
] | - Paper: https://www.aclweb.org/anthology/W19-8670.pdf
- Data: https://github.com/UFAL-DSG/cs_restaurant_dataset
- The dataset will likely be part of the GEM benchmark | false |
729,898,867 | https://api.github.com/repos/huggingface/datasets/issues/761 | https://github.com/huggingface/datasets/issues/761 | 761 | Downloaded datasets are not usable offline | closed | 2 | 2020-10-26T20:54:46 | 2022-02-15T10:32:28 | 2022-02-15T10:32:28 | ghazi-f | [] | I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet it still raises an error from the ```requests``` library trying to reach the online dataset.
Is this the intended behavior?
(Sorry, I wrote the first version of this issue while still on nlp 0.3.0). | false |
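Offline support was eventually added via an environment variable — a sketch assuming a later `datasets` version:

```python
import os

# Must be set before importing datasets; with this flag the library serves
# anything already in the local cache without attempting network calls.
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset
imdb = load_dataset("imdb")
```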
729,637,917 | https://api.github.com/repos/huggingface/datasets/issues/760 | https://github.com/huggingface/datasets/issues/760 | 760 | Add meta-data to the HANS dataset | closed | 0 | 2020-10-26T14:56:53 | 2020-12-03T13:38:34 | 2020-12-03T13:38:34 | yjernite | [
"good first issue",
"dataset bug"
] | The current version of the [HANS dataset](https://github.com/huggingface/datasets/blob/master/datasets/hans/hans.py) is missing the additional information provided for each example, including the sentence parses, heuristic and subcase. | false |
729,046,916 | https://api.github.com/repos/huggingface/datasets/issues/759 | https://github.com/huggingface/datasets/issues/759 | 759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | closed | 19 | 2020-10-25T15:34:57 | 2023-09-13T23:56:51 | 2021-08-04T18:10:09 | AI678 | [] | Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | false |
728,638,559 | https://api.github.com/repos/huggingface/datasets/issues/758 | https://github.com/huggingface/datasets/issues/758 | 758 | Process 0 very slow when using num_procs with map to tokenizer | closed | 6 | 2020-10-24T02:40:20 | 2020-10-28T03:59:46 | 2020-10-28T03:59:45 | ksjae | [] | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_speci... | false |
728,241,494 | https://api.github.com/repos/huggingface/datasets/issues/757 | https://github.com/huggingface/datasets/issues/757 | 757 | CUDA out of memory | closed | 8 | 2020-10-23T13:57:00 | 2020-12-23T14:06:29 | 2020-12-23T14:06:29 | li1117heex | [] | In your dataset, CUDA runs out of memory as soon as the trainer begins;
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
| false |