id (int64) | url (string) | html_url (string) | number (int64) | title (string) | state (string, 2 classes) | comments (int64) | created_at (timestamp) | updated_at (timestamp) | closed_at (timestamp, nullable) | user_login (string) | labels (list) | body (string, nullable) | is_pull_request (bool)
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
851,229,399 | https://api.github.com/repos/huggingface/datasets/issues/2172 | https://github.com/huggingface/datasets/pull/2172 | 2,172 | Pin fsspec lower than 0.9.0 | closed | 0 | 2021-04-06T09:19:09 | 2021-04-06T09:49:27 | 2021-04-06T09:49:26 | lhoestq | [] | Today's release of `fsspec` 0.9.0 was accompanied by a new release of `s3fs` 0.6.0, but this version breaks the CI (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/5312/workflows/490f3240-cd1c-4dd1-bb60-b416771c5584/jobs/32734) for example)
I'm pinning `fsspec` until this has been resolved | true |
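For context, the pin described above is the standard dependency-bound technique; a minimal `setup.py` sketch (the exact bound is illustrative, not the actual requirement string from the PR):
```python
from setuptools import setup

setup(
    name="example-package",  # illustrative
    install_requires=[
        "fsspec<0.9.0",  # stay below the release that pulled in the breaking s3fs 0.6.0
    ],
)
```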
851,090,662 | https://api.github.com/repos/huggingface/datasets/issues/2171 | https://github.com/huggingface/datasets/pull/2171 | 2,171 | Fixed the link to wikiauto training data. | closed | 3 | 2021-04-06T07:13:11 | 2021-04-06T16:05:42 | 2021-04-06T16:05:09 | mounicam | [] | true | |
850,913,228 | https://api.github.com/repos/huggingface/datasets/issues/2170 | https://github.com/huggingface/datasets/issues/2170 | 2,170 | Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date | open | 1 | 2021-04-06T03:13:18 | 2021-06-16T01:10:50 | null | leezu | [] | Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides
```
20201220/ 02-Feb-2021 01:36 -
20210101/ 21-Feb-2021 01:26 -
20210120/ ... | false |
850,456,180 | https://api.github.com/repos/huggingface/datasets/issues/2169 | https://github.com/huggingface/datasets/pull/2169 | 2,169 | Updated WER metric implementation to avoid memory issues | closed | 1 | 2021-04-05T15:43:20 | 2021-04-06T15:02:58 | 2021-04-06T15:02:58 | diego-fustes | [] | This is in order to fix this issue:
https://github.com/huggingface/datasets/issues/2078
| true |
849,957,941 | https://api.github.com/repos/huggingface/datasets/issues/2168 | https://github.com/huggingface/datasets/pull/2168 | 2,168 | Preserve split type when reloading dataset | closed | 5 | 2021-04-04T20:46:21 | 2021-04-19T10:57:05 | 2021-04-19T09:08:55 | mariosasko | [] | Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arr... | true |
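For reference, the split objects this PR round-trips can be built either from a string or via the `ReadInstruction` API; a minimal sketch using the `sst` dataset from the companion issue:
```python
from datasets import ReadInstruction, load_dataset

# Two equivalent ways to request the first 10% of the train split; the
# resulting split object is what save_to_disk/load_from_disk must preserve
# without falling back to eval().
ds_from_str = load_dataset("sst", split="train[:10%]")
ds_from_ri = load_dataset("sst", split=ReadInstruction("train", to=10, unit="%"))
```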
849,944,891 | https://api.github.com/repos/huggingface/datasets/issues/2167 | https://github.com/huggingface/datasets/issues/2167 | 2,167 | Split type not preserved when reloading the dataset | closed | 0 | 2021-04-04T19:29:54 | 2021-04-19T09:08:55 | 2021-04-19T09:08:55 | mariosasko | [] | A minimal reproducible example:
```python
>>> from datasets import load_dataset, Dataset
>>> dset = load_dataset("sst", split="train")
>>> dset.save_to_disk("sst")
>>> type(dset.split)
<class 'datasets.splits.NamedSplit'>
>>> dset = Dataset.load_from_disk("sst")
>>> type(dset.split) # NamedSplit expected
<cla... | false |
849,778,545 | https://api.github.com/repos/huggingface/datasets/issues/2166 | https://github.com/huggingface/datasets/issues/2166 | 2,166 | Regarding Test Sets for the GEM datasets | closed | 2 | 2021-04-04T02:02:45 | 2021-04-06T08:13:12 | 2021-04-06T08:13:12 | vyraun | [
"Dataset discussion"
] | @yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)?
e.g.
```
from datasets import load_dataset
DATASET_NAME="common_gen"
data = load_dataset("gem", DATASET_NAME)
```
The test set doesn't have the target or references.
```
data['test... | false |
849,771,665 | https://api.github.com/repos/huggingface/datasets/issues/2165 | https://github.com/huggingface/datasets/issues/2165 | 2,165 | How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | closed | 7 | 2021-04-04T01:01:48 | 2021-08-24T15:55:35 | 2021-04-07T15:06:04 | y-rokutan | [] | Hi,
I'm trying to pretrain a DeepSpeed model using the HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
... | false |
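A common answer to this question is to rely on `set_format("torch")` and, if a true `torch.utils.data.Dataset` is required, wrap the Arrow-backed dataset in a thin adapter; a minimal sketch, assuming the dataset has already been tokenized so the listed columns exist:
```python
import torch
from datasets import load_dataset


class HFWrapper(torch.utils.data.Dataset):
    """Expose a datasets.Dataset through the torch Dataset interface."""

    def __init__(self, hf_dataset):
        self.ds = hf_dataset

    def __len__(self):
        return len(self.ds)

    def __getitem__(self, idx):
        # Returns a dict of tensors once set_format("torch") is applied.
        return self.ds[idx]


train_ds = load_dataset("scientific_papers", "arxiv", split="train")
train_ds.set_format(type="torch", columns=["input_ids", "attention_mask", "labels"])
loader = torch.utils.data.DataLoader(HFWrapper(train_ds), batch_size=8)
```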
849,739,759 | https://api.github.com/repos/huggingface/datasets/issues/2164 | https://github.com/huggingface/datasets/pull/2164 | 2,164 | Replace assertTrue(isinstance with assertIsInstance in tests | closed | 0 | 2021-04-03T21:07:02 | 2021-04-06T14:41:09 | 2021-04-06T14:41:08 | mariosasko | [] | Replaces all the occurrences of the `assertTrue(isinstance(` pattern with `assertIsInstance`. | true |
849,669,366 | https://api.github.com/repos/huggingface/datasets/issues/2163 | https://github.com/huggingface/datasets/pull/2163 | 2,163 | Concat only unique fields in DatasetInfo.from_merge | closed | 3 | 2021-04-03T14:31:30 | 2021-04-06T14:40:00 | 2021-04-06T14:39:59 | mariosasko | [] | I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case.
Fixes #2103 | true |
849,129,201 | https://api.github.com/repos/huggingface/datasets/issues/2162 | https://github.com/huggingface/datasets/issues/2162 | 2,162 | visualization for cc100 is broken | closed | 3 | 2021-04-02T10:11:13 | 2022-10-05T13:20:24 | 2022-10-05T13:20:24 | dorost1234 | [
"nlp-viewer"
] | Hi
visualization through dataset viewer for cc100 is broken
https://huggingface.co/datasets/viewer/
thanks a lot
| false |
849,127,041 | https://api.github.com/repos/huggingface/datasets/issues/2161 | https://github.com/huggingface/datasets/issues/2161 | 2,161 | any possibility to download part of large datasets only? | closed | 6 | 2021-04-02T10:06:46 | 2022-10-05T13:26:51 | 2022-10-05T13:26:51 | dorost1234 | [] | Hi
Some of the datasets I need, like cc100, are very large, and I wonder if I can download the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? thanks | false |
849,052,921 | https://api.github.com/repos/huggingface/datasets/issues/2160 | https://github.com/huggingface/datasets/issues/2160 | 2,160 | data_args.preprocessing_num_workers almost freezes | closed | 2 | 2021-04-02T07:56:13 | 2021-04-02T10:14:32 | 2021-04-02T10:14:31 | dorost1234 | [] | Hi @lhoestq
I am running this code from huggingface transformers https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
To speed up tokenization, since I am running on multiple datasets, I am using `data_args.preprocessing_num_workers = 4` with the opus100 corpus, but this moves ... | false |
848,851,962 | https://api.github.com/repos/huggingface/datasets/issues/2159 | https://github.com/huggingface/datasets/issues/2159 | 2,159 | adding ccnet dataset | closed | 1 | 2021-04-01T23:28:36 | 2021-04-02T10:05:19 | 2021-04-02T10:05:19 | dorost1234 | [
"dataset request"
] | ## Adding a Dataset
- **Name:** ccnet
- **Description:**
Common Crawl
- **Paper:**
https://arxiv.org/abs/1911.00359
- **Data:**
https://github.com/facebookresearch/cc_net
- **Motivation:**
this is one of the most comprehensive clean monolingual datasets across a variety of languages. Quite importan... | false |
848,506,746 | https://api.github.com/repos/huggingface/datasets/issues/2158 | https://github.com/huggingface/datasets/issues/2158 | 2,158 | viewer "fake_news_english" error | closed | 2 | 2021-04-01T14:13:20 | 2022-10-05T13:22:02 | 2022-10-05T13:22:02 | emanuelevivoli | [
"nlp-viewer"
] | When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error:
> ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional depe... | false |
847,205,239 | https://api.github.com/repos/huggingface/datasets/issues/2157 | https://github.com/huggingface/datasets/pull/2157 | 2,157 | updated user permissions based on umask | closed | 0 | 2021-03-31T19:38:29 | 2021-04-06T07:19:19 | 2021-04-06T07:19:19 | bhavitvyamalik | [] | Updated user permissions based on running user's umask (#2065). Let me know if `0o666` is looking good or should I change it to `~umask` only (to give execute permissions as well) | true |
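For reference, the `~umask` variant discussed in this PR can be computed like this; a minimal sketch (the file path is illustrative):
```python
import os

# os.umask() sets the umask and returns the previous value, so set a dummy
# value and immediately restore it to read the current umask.
umask = os.umask(0o022)
os.umask(umask)

path = "example_cache_file"  # illustrative
open(path, "w").close()

# 0o666 & ~umask gives the standard permissions for a plain (non-executable)
# file; under the common umask 0o022 this is 0o644 (-rw-r--r--).
os.chmod(path, 0o666 & ~umask)
```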
847,198,295 | https://api.github.com/repos/huggingface/datasets/issues/2156 | https://github.com/huggingface/datasets/pull/2156 | 2,156 | User permissions | closed | 0 | 2021-03-31T19:33:48 | 2021-03-31T19:34:24 | 2021-03-31T19:34:24 | bhavitvyamalik | [] | Updated user permissions based on running user's umask. Let me know if `0o666` is looking good or should I change it to `~umask` only (to give execute permissions as well) | true |
846,786,897 | https://api.github.com/repos/huggingface/datasets/issues/2155 | https://github.com/huggingface/datasets/pull/2155 | 2,155 | Add table classes to the documentation | closed | 1 | 2021-03-31T14:36:10 | 2021-04-01T16:46:30 | 2021-03-31T15:42:08 | lhoestq | [] | Following #2025 , I added the table classes to the documentation
cc @albertvillanova | true |
846,763,960 | https://api.github.com/repos/huggingface/datasets/issues/2154 | https://github.com/huggingface/datasets/pull/2154 | 2,154 | Adding the NorNE dataset for Norwegian POS and NER | closed | 1 | 2021-03-31T14:22:50 | 2021-04-01T09:27:00 | 2021-04-01T09:16:08 | versae | [] | NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, or... | true |
846,181,502 | https://api.github.com/repos/huggingface/datasets/issues/2153 | https://github.com/huggingface/datasets/issues/2153 | 2,153 | load_dataset ignoring features | closed | 3 | 2021-03-31T08:30:09 | 2022-10-05T13:29:12 | 2022-10-05T13:29:12 | GuillemGSubies | [
"bug"
] | First of all, I'm sorry if it is a repeated issue or the changes are already in master, I searched and I didn't find anything.
I'm using datasets 1.5.0

As you can see, when I load the dataset, the C... | false |
845,751,273 | https://api.github.com/repos/huggingface/datasets/issues/2152 | https://github.com/huggingface/datasets/pull/2152 | 2,152 | Update README.md | closed | 0 | 2021-03-31T03:21:19 | 2021-04-01T10:20:37 | 2021-04-01T10:20:36 | JieyuZhao | [] | Updated some descriptions of Wino_Bias dataset. | true |
844,886,081 | https://api.github.com/repos/huggingface/datasets/issues/2151 | https://github.com/huggingface/datasets/pull/2151 | 2,151 | Add support for axis in concatenate datasets | closed | 5 | 2021-03-30T16:58:44 | 2021-06-23T17:41:02 | 2021-04-19T16:07:18 | albertvillanova | [
"enhancement"
] | Add support for `axis` (0 or 1) in `concatenate_datasets`.
Close #853. | true |
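A minimal usage sketch of the new `axis` argument (data illustrative):
```python
from datasets import Dataset, concatenate_datasets

d1 = Dataset.from_dict({"a": [1, 2, 3]})
d2 = Dataset.from_dict({"b": [4, 5, 6]})

rows = concatenate_datasets([d1, d1], axis=0)  # stack rows (the default)
cols = concatenate_datasets([d1, d2], axis=1)  # join columns side by side
print(cols.column_names)  # ['a', 'b']
```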
844,776,448 | https://api.github.com/repos/huggingface/datasets/issues/2150 | https://github.com/huggingface/datasets/pull/2150 | 2,150 | Allow pickling of big in-memory tables | closed | 0 | 2021-03-30T15:51:56 | 2021-03-31T10:37:15 | 2021-03-31T10:37:14 | lhoestq | [] | This should fix issue #2134
Pickling is limited to objects smaller than 4 GiB, so it's not possible to pickle a big Arrow table (for multiprocessing for example).
For big tables, we have to write them on disk and only pickle the path to the table. | true |
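The approach described above (write the table to disk, pickle only its path) looks roughly like this; a minimal sketch with illustrative names, not the actual implementation:
```python
import pickle
import pyarrow as pa


class PathPickledTable:
    """A table wrapper that pickles as a file path, not as table bytes."""

    def __init__(self, path):
        self.path = path
        # Memory-map the Arrow IPC file instead of loading it into RAM.
        self.table = pa.ipc.open_file(pa.memory_map(path)).read_all()

    def __reduce__(self):
        # Pickle reconstructs the object from the on-disk file, so the
        # (possibly >4GiB) payload never passes through the pickler.
        return (PathPickledTable, (self.path,))


table = pa.table({"x": list(range(5))})
with pa.OSFile("table.arrow", "wb") as sink:
    with pa.ipc.new_file(sink, table.schema) as writer:
        writer.write_table(table)

blob = pickle.dumps(PathPickledTable("table.arrow"))  # only the path is pickled
```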
844,734,076 | https://api.github.com/repos/huggingface/datasets/issues/2149 | https://github.com/huggingface/datasets/issues/2149 | 2,149 | Telugu subset missing for xtreme tatoeba dataset | closed | 2 | 2021-03-30T15:26:34 | 2022-10-05T13:28:30 | 2022-10-05T13:28:30 | cosmeowpawlitan | [] | from nlp import load_dataset
train_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation']
ValueError: BuilderConfig tatoeba.tel not found.
but language tel is actually included in xtreme:
https://github.com/google-research/xtreme/blob/master/utils_preprocess.py
def tatoeba_preprocess(args):
lang3_dict ... | false |
844,700,910 | https://api.github.com/repos/huggingface/datasets/issues/2148 | https://github.com/huggingface/datasets/issues/2148 | 2,148 | Add configurable options to `seqeval` metric | closed | 1 | 2021-03-30T15:04:06 | 2021-04-15T13:49:46 | 2021-04-15T13:49:46 | marrodion | [] | Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation).
However, seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs... | false |
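For reference, the seqeval kwargs the issue asks to expose look like this when the library is used directly; a minimal sketch:
```python
from seqeval.metrics import classification_report
from seqeval.scheme import IOB2

y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "B-ORG"]]

# mode="strict" plus an explicit scheme enables the scheme-aware
# evaluation that the default (CoNLL-style) mode does not apply.
print(classification_report(y_true, y_pred, mode="strict", scheme=IOB2))
```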
844,687,831 | https://api.github.com/repos/huggingface/datasets/issues/2147 | https://github.com/huggingface/datasets/pull/2147 | 2,147 | Render docstring return type as inline | closed | 0 | 2021-03-30T14:55:43 | 2021-03-31T13:11:05 | 2021-03-31T13:11:05 | albertvillanova | [
"documentation"
] | This documentation setting will avoid having the return type in a separate line under `Return type`.
See e.g. current docs for `Dataset.to_csv`. | true |
844,673,244 | https://api.github.com/repos/huggingface/datasets/issues/2146 | https://github.com/huggingface/datasets/issues/2146 | 2,146 | Dataset file size on disk is very large with 3D Array | open | 6 | 2021-03-30T14:46:09 | 2021-04-16T13:07:02 | null | jblemoine | [] | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D arrays with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
`{
"description": "",
"citation": ""... | false |
844,603,518 | https://api.github.com/repos/huggingface/datasets/issues/2145 | https://github.com/huggingface/datasets/pull/2145 | 2,145 | Implement Dataset add_column | closed | 1 | 2021-03-30T14:02:14 | 2021-04-29T14:50:44 | 2021-04-29T14:50:43 | albertvillanova | [
"enhancement"
] | Implement `Dataset.add_column`.
Close #1954. | true |
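A minimal usage sketch of the method added here (data illustrative):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})
ds = ds.add_column("label", [0, 1, 0])  # column length must match len(ds)
print(ds.column_names)  # ['text', 'label']
```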
844,352,067 | https://api.github.com/repos/huggingface/datasets/issues/2144 | https://github.com/huggingface/datasets/issues/2144 | 2,144 | Loading wikipedia 20200501.en throws pyarrow related error | open | 6 | 2021-03-30T10:38:31 | 2021-04-01T09:21:17 | null | TomPyonsuke | [] | **Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped... | false |
844,313,228 | https://api.github.com/repos/huggingface/datasets/issues/2143 | https://github.com/huggingface/datasets/pull/2143 | 2,143 | task casting via load_dataset | closed | 0 | 2021-03-30T10:00:42 | 2021-06-11T13:20:41 | 2021-06-11T13:20:36 | theo-m | [] | wip
Not satisfied with the API: it means that, as a dataset implementer, I need to write a boilerplate function and classes for each `<dataset><task>` "facet". | true |
843,919,420 | https://api.github.com/repos/huggingface/datasets/issues/2142 | https://github.com/huggingface/datasets/pull/2142 | 2,142 | Gem V1.1 | closed | 0 | 2021-03-29T23:47:02 | 2021-03-30T00:10:02 | 2021-03-30T00:10:02 | yjernite | [] | This branch updates the GEM benchmark to its 1.1 version which includes:
- challenge sets for most tasks
- detokenized TurkCorpus to match the rest of the text simplification subtasks
- fixed inputs for TurkCorpus and ASSET test sets
- 18 languages in WikiLingua
cc @sebastianGehrmann | true |
843,914,790 | https://api.github.com/repos/huggingface/datasets/issues/2141 | https://github.com/huggingface/datasets/pull/2141 | 2,141 | added spans field for the wikiann datasets | closed | 3 | 2021-03-29T23:38:26 | 2021-03-31T13:27:50 | 2021-03-31T13:27:50 | rabeehk | [] | Hi @lhoestq
I tried to add spans to the wikiann datasets.
Thanks a lot for kindly having a look.
This addresses https://github.com/huggingface/datasets/issues/2130.
Best regards
Rabeeh | true |
843,830,451 | https://api.github.com/repos/huggingface/datasets/issues/2140 | https://github.com/huggingface/datasets/pull/2140 | 2,140 | add banking77 dataset | closed | 1 | 2021-03-29T21:32:23 | 2021-04-09T09:32:18 | 2021-04-09T09:32:18 | dkajtoch | [] | Intent classification/detection dataset from banking category with 77 unique intents. | true |
843,662,613 | https://api.github.com/repos/huggingface/datasets/issues/2139 | https://github.com/huggingface/datasets/issues/2139 | 2,139 | TypeError when using save_to_disk in a dataset loaded with ReadInstruction split | closed | 2 | 2021-03-29T18:23:54 | 2021-03-30T09:12:53 | 2021-03-30T09:12:53 | PedroMLF | [] | Hi,
Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`.
Here is the minimal reproducible example:
```python
from datasets import load_dataset
from dat... | false |
843,508,402 | https://api.github.com/repos/huggingface/datasets/issues/2138 | https://github.com/huggingface/datasets/pull/2138 | 2,138 | Add CER metric | closed | 0 | 2021-03-29T15:52:27 | 2021-04-06T16:16:11 | 2021-04-06T07:14:38 | chutaklee | [] | Add Character Error Rate (CER) metric that is used in ASR evaluation. I have also written unit tests (hopefully thorough enough) but I'm not sure how to integrate them into the existing codebase.
```python
from cer import CER
cer = CER()
class TestCER(unittest.TestCase):
def test_cer_case_senstive(self)... | true |
843,502,835 | https://api.github.com/repos/huggingface/datasets/issues/2137 | https://github.com/huggingface/datasets/pull/2137 | 2,137 | Fix missing infos from concurrent dataset loading | closed | 0 | 2021-03-29T15:46:12 | 2021-03-31T10:35:56 | 2021-03-31T10:35:55 | lhoestq | [] | This should fix issue #2131
When calling `load_dataset` at the same time from 2 workers, one of the workers could have missing split infos when reloading the dataset from the cache.
| true |
843,492,015 | https://api.github.com/repos/huggingface/datasets/issues/2136 | https://github.com/huggingface/datasets/pull/2136 | 2,136 | fix dialogue action slot name and value | closed | 0 | 2021-03-29T15:34:13 | 2021-03-31T12:48:02 | 2021-03-31T12:48:01 | adamlin120 | [] | fix #2128 | true |
843,246,344 | https://api.github.com/repos/huggingface/datasets/issues/2135 | https://github.com/huggingface/datasets/issues/2135 | 2,135 | en language data from MLQA dataset is missing | closed | 3 | 2021-03-29T10:47:50 | 2021-03-30T10:20:23 | 2021-03-30T10:20:23 | rabeehk | [] | Hi
I need mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. could you have a look please? @lhoestq thank you for your help to fix this issue. | false |
843,242,849 | https://api.github.com/repos/huggingface/datasets/issues/2134 | https://github.com/huggingface/datasets/issues/2134 | 2,134 | Saving large in-memory datasets with save_to_disk crashes because of pickling | closed | 6 | 2021-03-29T10:43:15 | 2021-05-03T17:59:21 | 2021-05-03T17:59:21 | prokopCerny | [
"bug"
] | Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium to large size datasets (pretokenized raw text sizes from few gigabytes to low tens of gigabytes), and have found out that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so... | false |
843,149,680 | https://api.github.com/repos/huggingface/datasets/issues/2133 | https://github.com/huggingface/datasets/issues/2133 | 2,133 | bug in mlqa dataset | closed | 3 | 2021-03-29T09:03:09 | 2021-03-30T17:40:57 | 2021-03-30T17:40:57 | dorost1234 | [] | Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
"\u0643\u0... | false |
843,142,822 | https://api.github.com/repos/huggingface/datasets/issues/2132 | https://github.com/huggingface/datasets/issues/2132 | 2,132 | TydiQA dataset is mixed and is not split per language | open | 3 | 2021-03-29T08:56:21 | 2021-04-04T09:57:15 | null | dorost1234 | [] | Hi @lhoestq
Currently TydiQA is mixed and user can only access the whole training set of all languages:
https://www.tensorflow.org/datasets/catalog/tydi_qa
To use this dataset, one needs to train/evaluate on each language separately, and having them mixed makes it hard to use this dataset. This is much convenien... | false |
843,133,112 | https://api.github.com/repos/huggingface/datasets/issues/2131 | https://github.com/huggingface/datasets/issues/2131 | 2,131 | When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object | closed | 3 | 2021-03-29T08:45:58 | 2021-04-10T11:08:55 | 2021-04-10T11:08:55 | andy-yangz | [
"bug"
] | version: 1.5.0
I met a very strange error. I am training a large-scale language model and need to train on 2 machines (workers).
And sometimes I will get this error `TypeError: 'NoneType' object is not iterable`
This is traceback
```
71 | | Traceback (most recent call last):
-- | -- | --
72 | | File "run_gpt.py"... | false |
843,111,936 | https://api.github.com/repos/huggingface/datasets/issues/2130 | https://github.com/huggingface/datasets/issues/2130 | 2,130 | wikiann dataset is missing columns | closed | 5 | 2021-03-29T08:23:00 | 2021-08-27T14:44:18 | 2021-08-27T14:44:18 | dorost1234 | [
"good first issue"
] | Hi
The WikiAnn dataset needs a "spans" column, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets. Could you please have a look? Thank you @lhoestq | false |
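Until the column landed, spans could be reconstructed from the existing `tokens` and `ner_tags` columns; a minimal sketch assuming IOB2-style string tags (the helper name is hypothetical):
```python
def extract_spans(tokens, tags):
    """Collect 'TYPE: surface text' spans from IOB2 tags (sketch)."""
    spans, current_tokens, current_type = [], [], None

    def flush():
        if current_tokens:
            spans.append(f"{current_type}: {' '.join(current_tokens)}")

    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            flush()
            current_tokens, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current_type == tag[2:]:
            current_tokens.append(token)
        else:  # "O" (or an inconsistent tag) closes any open span
            flush()
            current_tokens, current_type = [], None
    flush()
    return spans


print(extract_spans(["John", "lives", "in", "New", "York"],
                    ["B-PER", "O", "O", "B-LOC", "I-LOC"]))
# ['PER: John', 'LOC: New York']
```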
843,033,656 | https://api.github.com/repos/huggingface/datasets/issues/2129 | https://github.com/huggingface/datasets/issues/2129 | 2,129 | How to train BERT model with next sentence prediction? | closed | 4 | 2021-03-29T06:48:03 | 2021-04-01T04:58:40 | 2021-04-01T04:58:40 | jnishi | [] | Hello.
I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction
like `TextDatasetForNextSentencePrediction` in `huggingface/transformers`?
| false |
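There is no built-in equivalent in `datasets`, but NSP pairs can be produced with an ordinary preprocessing step; a minimal sketch (50% true next sentence, 50% random; names hypothetical):
```python
import random


def make_nsp_examples(sentences, seed=0):
    """Build (sentence_a, sentence_b, is_next) triples for NSP pretraining."""
    rng = random.Random(seed)
    examples = []
    for i in range(len(sentences) - 1):
        if rng.random() < 0.5:
            examples.append((sentences[i], sentences[i + 1], 1))  # actual next
        else:
            j = rng.randrange(len(sentences))  # random (possibly adjacent) sentence
            examples.append((sentences[i], sentences[j], 0))
    return examples


pairs = make_nsp_examples(["First sentence.", "Second one.", "Third one."])
```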
843,023,910 | https://api.github.com/repos/huggingface/datasets/issues/2128 | https://github.com/huggingface/datasets/issues/2128 | 2,128 | Dialogue action slot name and value are reversed in MultiWoZ 2.2 | closed | 1 | 2021-03-29T06:34:02 | 2021-03-31T12:48:01 | 2021-03-31T12:48:01 | adamlin120 | [
"dataset bug"
] | Hi @yjernite, thank you for adding MultiWoZ 2.2 in the huggingface datasets platform. It is beneficial!
I spotted an error: the order of dialogue action slot names and values is reversed.
https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.p... | false |
843,017,199 | https://api.github.com/repos/huggingface/datasets/issues/2127 | https://github.com/huggingface/datasets/pull/2127 | 2,127 | make documentation more clear to use different cloud storage | closed | 0 | 2021-03-29T06:24:06 | 2021-03-29T12:16:24 | 2021-03-29T12:16:24 | philschmid | [] | This PR extends the cloud storage documentation. To show you can use a different `fsspec` implementation. | true |
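A minimal sketch of the pattern this documentation change covers — passing a different `fsspec` implementation to `save_to_disk` (bucket name and credentials are illustrative, and the `fs` parameter is assumed as documented at the time):
```python
import s3fs
from datasets import load_dataset

fs = s3fs.S3FileSystem(anon=False)  # any fsspec-compatible filesystem

ds = load_dataset("imdb", split="train")
ds.save_to_disk("my-bucket/datasets/imdb-train", fs=fs)
```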
842,779,966 | https://api.github.com/repos/huggingface/datasets/issues/2126 | https://github.com/huggingface/datasets/pull/2126 | 2,126 | Replace legacy torch.Tensor constructor with torch.tensor | closed | 0 | 2021-03-28T16:57:30 | 2021-03-29T09:27:14 | 2021-03-29T09:27:13 | mariosasko | [] | The title says it all (motivated by [this issue](https://github.com/pytorch/pytorch/issues/53146) in the pytorch repo). | true |
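The distinction motivating the change, in brief:
```python
import torch

# Legacy constructor: an int argument is interpreted as a *shape*,
# yielding an uninitialized 5-element float tensor.
a = torch.Tensor(5)

# Factory function: the argument is interpreted as *data*, dtype inferred.
b = torch.tensor(5)          # tensor(5), dtype=torch.int64
c = torch.tensor([1.0, 2])   # tensor([1., 2.]), dtype=torch.float32
```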
842,690,570 | https://api.github.com/repos/huggingface/datasets/issues/2125 | https://github.com/huggingface/datasets/issues/2125 | 2,125 | Is dataset timit_asr broken? | closed | 2 | 2021-03-28T08:30:18 | 2021-03-28T12:29:25 | 2021-03-28T12:29:25 | kosuke-kitahara | [] | Using `timit_asr` dataset, I saw all records are the same.
``` python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_example... | false |
842,627,729 | https://api.github.com/repos/huggingface/datasets/issues/2124 | https://github.com/huggingface/datasets/issues/2124 | 2,124 | Adding ScaNN library to do MIPS? | open | 1 | 2021-03-28T00:07:00 | 2021-03-29T13:23:43 | null | shamanez | [] | @lhoestq Hi I am thinking of adding this new Google library to do the MIPS similar to **add_faiss_index**. As the paper suggests, it is really fast when it comes to retrieving the nearest neighbors.
https://github.com/google-research/google-research/tree/master/scann

d... | false |
842,194,588 | https://api.github.com/repos/huggingface/datasets/issues/2122 | https://github.com/huggingface/datasets/pull/2122 | 2,122 | Fast table queries with interpolation search | closed | 0 | 2021-03-26T18:09:20 | 2021-08-04T18:11:59 | 2021-04-06T14:33:01 | lhoestq | [] | ## Intro
This should fix issue #1803
Currently querying examples in a dataset is O(n) because of the underlying pyarrow ChunkedArrays implementation.
To fix this I implemented interpolation search, which is quite effective since datasets usually satisfy the condition of evenly distributed chunks (the default ch... | true |
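A minimal self-contained sketch of interpolation search over cumulative chunk offsets, as described (not the actual implementation):
```python
def interpolation_search(offsets, i):
    """Find k such that offsets[k] <= i < offsets[k + 1].

    offsets holds the sorted cumulative start indices of the chunks; when
    chunks are evenly sized this converges in roughly O(log log n) steps.
    """
    lo, hi = 0, len(offsets) - 2
    while lo <= hi:
        # Guess proportionally to where i sits between the current bounds.
        k = lo + (i - offsets[lo]) * (hi - lo) // max(offsets[hi] - offsets[lo], 1)
        k = min(max(k, lo), hi)
        if offsets[k] <= i < offsets[k + 1]:
            return k
        if i < offsets[k]:
            hi = k - 1
        else:
            lo = k + 1
    raise IndexError(f"{i} is out of range")


offsets = [0, 100, 200, 300, 400]  # 4 evenly sized chunks
assert interpolation_search(offsets, 250) == 2
```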
842,148,633 | https://api.github.com/repos/huggingface/datasets/issues/2121 | https://github.com/huggingface/datasets/pull/2121 | 2,121 | Add Validation For README | closed | 7 | 2021-03-26T17:02:17 | 2021-05-10T13:17:18 | 2021-05-10T09:41:41 | gchhablani | [] | Hi @lhoestq, @yjernite
This is a simple Readme parser. All classes specific to different sections can inherit `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
... | true |
841,954,521 | https://api.github.com/repos/huggingface/datasets/issues/2120 | https://github.com/huggingface/datasets/issues/2120 | 2,120 | dataset viewer does not work anymore | closed | 2 | 2021-03-26T13:22:13 | 2021-03-26T15:52:22 | 2021-03-26T15:52:22 | dorost1234 | [
"nlp-viewer"
] | Hi
I normally use this link to see all datasets and how I can load them
https://huggingface.co/datasets/viewer/
Now I am getting
502 Bad Gateway
nginx/1.18.0 (Ubuntu)
Could you bring this webpage back? This was very helpful @lhoestq
thanks for your help | false |
841,567,199 | https://api.github.com/repos/huggingface/datasets/issues/2119 | https://github.com/huggingface/datasets/pull/2119 | 2,119 | copy.deepcopy os.environ instead of copy | closed | 0 | 2021-03-26T03:58:38 | 2021-03-26T15:13:52 | 2021-03-26T15:13:52 | NihalHarish | [] | Fixes: https://github.com/huggingface/datasets/issues/2115
- bug fix: using environ.copy() returns a dict.
- using deepcopy(environ) returns an `_environ` object
- Changing the datatype of the _environ object can break code if subsequent libraries perform operations using APIs exclusive to the environ object, lik... | true |
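The type difference the fix relies on, in brief:
```python
import copy
import os

env_copy = os.environ.copy()          # plain dict: loses the _Environ API
env_deep = copy.deepcopy(os.environ)  # still an os._Environ instance

print(type(env_copy))  # <class 'dict'>
print(type(env_deep))  # <class 'os._Environ'>
```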
841,563,329 | https://api.github.com/repos/huggingface/datasets/issues/2118 | https://github.com/huggingface/datasets/pull/2118 | 2,118 | Remove os.environ.copy in Dataset.map | closed | 1 | 2021-03-26T03:48:17 | 2021-03-26T12:03:23 | 2021-03-26T12:00:05 | mariosasko | [] | Replace `os.environ.copy` with in-place modification
Fixes #2115 | true |
841,535,283 | https://api.github.com/repos/huggingface/datasets/issues/2117 | https://github.com/huggingface/datasets/issues/2117 | 2,117 | load_metric from local "glue.py" meets error 'NoneType' object is not callable | closed | 3 | 2021-03-26T02:35:22 | 2021-08-25T21:44:05 | 2021-03-26T02:40:26 | Frankie123421 | [] | actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset(path='/home/glue.py', name=actual_task)
metric = load_metric(path='/home/glue.py', name=actual_task)
---------------------------------------------------------------------------
TypeError Traceback (most recent... | false |
841,481,292 | https://api.github.com/repos/huggingface/datasets/issues/2116 | https://github.com/huggingface/datasets/issues/2116 | 2,116 | Creating custom dataset results in error while calling the map() function | closed | 1 | 2021-03-26T00:37:46 | 2021-03-31T14:30:32 | 2021-03-31T14:30:32 | GeetDsa | [] | Calling `map()` from the `datasets` library results in an error when defining a custom dataset.
Reproducible example:
```
import datasets
class MyDataset(datasets.Dataset):
def __init__(self, sentences):
"Initialization"
self.samples = sentences
def __len__(self):
"Denotes the ... | false |
841,283,974 | https://api.github.com/repos/huggingface/datasets/issues/2115 | https://github.com/huggingface/datasets/issues/2115 | 2,115 | The datasets.map() implementation modifies the datatype of os.environ object | closed | 0 | 2021-03-25T20:29:19 | 2021-03-26T15:13:52 | 2021-03-26T15:13:52 | leleamol | [] | In our testing, we noticed that the datasets.map() implementation is modifying the datatype of python os.environ object from '_Environ' to 'dict'.
This causes following function calls to fail as follows:
`
x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
TypeError: get() takes... | false |
841,207,878 | https://api.github.com/repos/huggingface/datasets/issues/2114 | https://github.com/huggingface/datasets/pull/2114 | 2,114 | Support for legal NLP datasets (EURLEX, ECtHR cases and EU-REG-IR) | closed | 2 | 2021-03-25T18:40:17 | 2021-03-31T10:38:50 | 2021-03-31T10:38:50 | iliaschalkidis | [] | Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084)
- EU-REG-IR (https://arxiv.org/abs/2101.10726) | true |
841,191,303 | https://api.github.com/repos/huggingface/datasets/issues/2113 | https://github.com/huggingface/datasets/pull/2113 | 2,113 | Implement Dataset as context manager | closed | 0 | 2021-03-25T18:18:30 | 2021-03-31T11:30:14 | 2021-03-31T08:30:11 | albertvillanova | [] | When used as a context manager, the dataset is safely deleted if an exception is raised.
This will avoid
> During handling of the above exception, another exception occurred: | true |
841,098,008 | https://api.github.com/repos/huggingface/datasets/issues/2112 | https://github.com/huggingface/datasets/pull/2112 | 2,112 | Support for legal NLP datasets (EURLEX and ECtHR cases) | closed | 0 | 2021-03-25T16:24:17 | 2021-03-25T18:39:31 | 2021-03-25T18:34:31 | iliaschalkidis | [] | Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084) | true |
841,082,087 | https://api.github.com/repos/huggingface/datasets/issues/2111 | https://github.com/huggingface/datasets/pull/2111 | 2,111 | Compute WER metric iteratively | closed | 7 | 2021-03-25T16:06:48 | 2021-04-06T07:20:43 | 2021-04-06T07:20:43 | albertvillanova | [] | Compute WER metric iteratively to avoid MemoryError.
Fix #2078. | true |
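The iterative approach amounts to accumulating edit-distance counts pair by pair instead of joining all texts into two giant strings. A self-contained sketch of the idea (the actual metric delegates the per-pair computation to jiwer):
```python
def word_edit_distance(ref, hyp):
    """Levenshtein distance between two word lists (rolling-row DP)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]


def iterative_wer(predictions, references):
    errors = total = 0
    for pred, ref in zip(predictions, references):
        ref_words = ref.split()
        errors += word_edit_distance(ref_words, pred.split())
        total += len(ref_words)  # only small per-pair counters stay in memory
    return errors / total


print(iterative_wer(["hello there"], ["hello world"]))  # 0.5
```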
840,794,995 | https://api.github.com/repos/huggingface/datasets/issues/2110 | https://github.com/huggingface/datasets/pull/2110 | 2,110 | Fix incorrect assertion in builder.py | closed | 2 | 2021-03-25T10:39:20 | 2021-04-12T13:33:03 | 2021-04-12T13:33:03 | dreamgonfly | [] | Fix incorrect num_examples comparison assertion in builder.py | true |
840,746,598 | https://api.github.com/repos/huggingface/datasets/issues/2109 | https://github.com/huggingface/datasets/pull/2109 | 2,109 | Add more issue templates and customize issue template chooser | closed | 2 | 2021-03-25T09:41:53 | 2021-04-19T06:20:11 | 2021-04-19T06:20:11 | albertvillanova | [] | When opening an issue, it is not evident for the users how to choose a blank issue template. There is a link at the bottom of all the other issue templates (`Don’t see your issue here? Open a blank issue.`), but this is not very visible for users. This is the reason why many users finally chose the `add-dataset` templa... | true |
840,181,055 | https://api.github.com/repos/huggingface/datasets/issues/2108 | https://github.com/huggingface/datasets/issues/2108 | 2,108 | Is there a way to use a GPU only when training an Index in the process of add_faisis_index? | open | 0 | 2021-03-24T21:32:16 | 2021-03-25T06:31:43 | null | shamanez | [
"question"
] | Motivation - Some FAISS indexes like IVF consist of the training step that clusters the dataset into a given number of indexes. It would be nice if we can use a GPU to do the training step and covert the index back to CPU as mention in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6... | false |
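The pattern asked about — train an IVF index on GPU, then move it back to CPU — looks like this in raw faiss; a minimal sketch (requires faiss-gpu; sizes are illustrative):
```python
import faiss
import numpy as np

d, nlist = 64, 100
xb = np.random.random((10_000, d)).astype("float32")

cpu_index = faiss.index_factory(d, f"IVF{nlist},Flat")

# Move to GPU only for the (expensive) k-means training step...
res = faiss.StandardGpuResources()
gpu_index = faiss.index_cpu_to_gpu(res, 0, cpu_index)
gpu_index.train(xb)

# ...then bring the trained index back to CPU for adding and searching.
cpu_index = faiss.index_gpu_to_cpu(gpu_index)
cpu_index.add(xb)
```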
839,495,825 | https://api.github.com/repos/huggingface/datasets/issues/2107 | https://github.com/huggingface/datasets/pull/2107 | 2,107 | Metadata validation | closed | 5 | 2021-03-24T08:52:41 | 2021-04-26T08:27:14 | 2021-04-26T08:27:13 | theo-m | [] | - `pydantic` metadata schema with dedicated validators against our taxonomy
- ci script to validate new changes against this schema and start a virtuous loop
- soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future
for reference with the current validation we have ~365... | true |
839,084,264 | https://api.github.com/repos/huggingface/datasets/issues/2106 | https://github.com/huggingface/datasets/issues/2106 | 2,106 | WMT19 Dataset for Kazakh-English is not formatted correctly | open | 1 | 2021-03-23T20:14:47 | 2021-03-25T21:36:20 | null | trina731 | [
"dataset bug"
] | In addition to the bug of languages being switched from Issue #415, there are incorrect translations in the dataset because the English-Kazakh translations have a one-off formatting error.
The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here:
> ... | false |
839,059,226 | https://api.github.com/repos/huggingface/datasets/issues/2105 | https://github.com/huggingface/datasets/issues/2105 | 2,105 | Request to remove S2ORC dataset | open | 3 | 2021-03-23T19:43:06 | 2021-08-04T19:18:02 | null | kyleclo | [] | Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks! | false |
839,027,834 | https://api.github.com/repos/huggingface/datasets/issues/2104 | https://github.com/huggingface/datasets/issues/2104 | 2,104 | Trouble loading wiki_movies | closed | 2 | 2021-03-23T18:59:54 | 2022-03-30T08:22:58 | 2022-03-30T08:22:58 | adityaarunsinghal | [] | Hello,
I am trying to load_dataset("wiki_movies") and it gives me this error -
`FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingfa... | false |
838,946,916 | https://api.github.com/repos/huggingface/datasets/issues/2103 | https://github.com/huggingface/datasets/issues/2103 | 2,103 | citation, homepage, and license fields of `dataset_info.json` are duplicated many times | closed | 1 | 2021-03-23T17:18:09 | 2021-04-06T14:39:59 | 2021-04-06T14:39:59 | samsontmr | [
"enhancement",
"good first issue"
] | This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation.
Example result:
```
"citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {... | false |
838,794,090 | https://api.github.com/repos/huggingface/datasets/issues/2102 | https://github.com/huggingface/datasets/pull/2102 | 2,102 | Move Dataset.to_csv to csv module | closed | 0 | 2021-03-23T14:35:46 | 2021-03-24T14:07:35 | 2021-03-24T14:07:34 | albertvillanova | [
"refactoring"
] | Move the implementation of `Dataset.to_csv` to module `datasets.io.csv`. | true |
838,586,184 | https://api.github.com/repos/huggingface/datasets/issues/2101 | https://github.com/huggingface/datasets/pull/2101 | 2,101 | MIAM dataset - new citation details | closed | 2 | 2021-03-23T10:41:23 | 2021-03-23T18:08:10 | 2021-03-23T18:08:10 | eusip | [] | Hi @lhoestq, I have updated the citations to reference an OpenReview preprint. | true |
838,574,631 | https://api.github.com/repos/huggingface/datasets/issues/2100 | https://github.com/huggingface/datasets/pull/2100 | 2,100 | Fix deprecated warning message and docstring | closed | 3 | 2021-03-23T10:27:52 | 2021-03-24T08:19:41 | 2021-03-23T18:03:49 | albertvillanova | [
"documentation"
] | Fix deprecated warnings:
- Use deprecated Sphinx directive in docstring
- Fix format of deprecated message
- Raise FutureWarning | true |
838,523,819 | https://api.github.com/repos/huggingface/datasets/issues/2099 | https://github.com/huggingface/datasets/issues/2099 | 2,099 | load_from_disk takes a long time to load local dataset | closed | 8 | 2021-03-23T09:28:37 | 2021-03-23T17:12:16 | 2021-03-23T17:12:16 | samsontmr | [] | I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin... | false |
838,447,959 | https://api.github.com/repos/huggingface/datasets/issues/2098 | https://github.com/huggingface/datasets/issues/2098 | 2,098 | SQuAD version | closed | 2 | 2021-03-23T07:47:54 | 2021-03-26T09:48:54 | 2021-03-26T09:48:54 | h-peng17 | [] | Hi~
I want to train on the SQuAD dataset. Which version is it, 1.1 or 1.0? I'm new to QA and couldn't find any description of it.
838,105,289 | https://api.github.com/repos/huggingface/datasets/issues/2097 | https://github.com/huggingface/datasets/pull/2097 | 2,097 | fixes issue #1110 by descending further if `obj["_type"]` is a dict | closed | 0 | 2021-03-22T21:00:55 | 2021-03-22T21:01:11 | 2021-03-22T21:01:11 | dcfidalgo | [] | Check metrics | true |
838,038,379 | https://api.github.com/repos/huggingface/datasets/issues/2096 | https://github.com/huggingface/datasets/issues/2096 | 2,096 | CoNLL 2003 dataset not including German | closed | 2 | 2021-03-22T19:23:56 | 2023-07-25T16:49:07 | 2023-07-25T16:49:07 | rxian | [
"dataset request"
] | Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with!
I was wondering if there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https://github.com/huggingface/datasets/tree/master/datasets/conll2003), since a copy of it ... | false |
837,209,211 | https://api.github.com/repos/huggingface/datasets/issues/2093 | https://github.com/huggingface/datasets/pull/2093 | 2,093 | Fix: Allows a feature to be named "_type" | closed | 4 | 2021-03-21T23:21:57 | 2021-03-25T14:35:54 | 2021-03-25T14:35:54 | dcfidalgo | [] | This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but I am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq | true |
836,984,043 | https://api.github.com/repos/huggingface/datasets/issues/2092 | https://github.com/huggingface/datasets/issues/2092 | 2,092 | How to disable making arrow tables in load_dataset ? | closed | 7 | 2021-03-21T04:50:07 | 2022-06-01T16:49:52 | 2022-06-01T16:49:52 | Jeevesh8 | [] | Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? | false |
836,831,403 | https://api.github.com/repos/huggingface/datasets/issues/2091 | https://github.com/huggingface/datasets/pull/2091 | 2,091 | Fix copy snippet in docs | closed | 0 | 2021-03-20T15:08:22 | 2021-03-24T08:20:50 | 2021-03-23T17:18:31 | mariosasko | [
"documentation"
] | With this change the lines starting with `...` in the code blocks can be properly copied to clipboard. | true |
836,807,498 | https://api.github.com/repos/huggingface/datasets/issues/2090 | https://github.com/huggingface/datasets/pull/2090 | 2,090 | Add machine translated multilingual STS benchmark dataset | closed | 6 | 2021-03-20T13:28:07 | 2021-03-29T13:24:42 | 2021-03-29T13:00:15 | PhilipMay | [] | also see here https://github.com/PhilipMay/stsb-multi-mt | true |
836,788,019 | https://api.github.com/repos/huggingface/datasets/issues/2089 | https://github.com/huggingface/datasets/issues/2089 | 2,089 | Add documentaton for dataset README.md files | closed | 8 | 2021-03-20T11:44:38 | 2023-07-25T16:45:38 | 2023-07-25T16:45:37 | PhilipMay | [] | Hi,
the dataset README files have special headers.
Somehow documentation of the allowed values and tags is missing.
Could you add that?
Just to give some concrete questions that should be answered imo:
- which values can be passed to multilinguality?
- what should be passed to language_creators?
- which valu... | false |
836,763,733 | https://api.github.com/repos/huggingface/datasets/issues/2088 | https://github.com/huggingface/datasets/pull/2088 | 2,088 | change bibtex template to author instead of authors | closed | 1 | 2021-03-20T09:23:44 | 2021-03-23T15:40:12 | 2021-03-23T15:40:12 | PhilipMay | [] | Hi,
IMO, when using BibTeX, `author` should be used instead of `authors`.
See here: http://www.bibtex.org/Using/de/
Thanks
Philip | true |
836,587,392 | https://api.github.com/repos/huggingface/datasets/issues/2087 | https://github.com/huggingface/datasets/pull/2087 | 2,087 | Update metadata if dataset features are modified | closed | 4 | 2021-03-20T02:05:23 | 2021-04-09T09:25:33 | 2021-04-09T09:25:33 | mariosasko | [] | This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features.
Fixes #2083
| true |
836,249,587 | https://api.github.com/repos/huggingface/datasets/issues/2086 | https://github.com/huggingface/datasets/pull/2086 | 2,086 | change user permissions to -rw-r--r-- | closed | 1 | 2021-03-19T18:14:56 | 2021-03-24T13:59:04 | 2021-03-24T13:59:04 | bhavitvyamalik | [] | Fix for #2065 | true |
835,870,994 | https://api.github.com/repos/huggingface/datasets/issues/2085 | https://github.com/huggingface/datasets/pull/2085 | 2,085 | Fix max_wait_time in requests | closed | 0 | 2021-03-19T11:22:26 | 2021-03-23T15:36:38 | 2021-03-23T15:36:37 | lhoestq | [] | it was handled as a min time, not max cc @SBrandeis | true |
835,750,671 | https://api.github.com/repos/huggingface/datasets/issues/2084 | https://github.com/huggingface/datasets/issues/2084 | 2,084 | CUAD - Contract Understanding Atticus Dataset | closed | 1 | 2021-03-19T09:27:43 | 2021-04-16T08:50:44 | 2021-04-16T08:50:44 | theo-m | [
"dataset request"
] | ## Adding a Dataset
- **Name:** CUAD - Contract Understanding Atticus Dataset
- **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- **Paper:** https://arxiv.org/abs/2103.06268
- **Data:** http... | false |
835,695,425 | https://api.github.com/repos/huggingface/datasets/issues/2083 | https://github.com/huggingface/datasets/issues/2083 | 2,083 | `concatenate_datasets` throws error when changing the order of datasets to concatenate | closed | 1 | 2021-03-19T08:29:48 | 2021-04-09T09:25:33 | 2021-04-09T09:25:33 | patrickvonplaten | [] | Hey,
I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets
and noticed that when the order in which the datasets are concatenated changes, an error is thrown where it shou... | false |
835,401,555 | https://api.github.com/repos/huggingface/datasets/issues/2082 | https://github.com/huggingface/datasets/pull/2082 | 2,082 | Updated card using information from data statement and datasheet | closed | 0 | 2021-03-19T00:39:38 | 2021-03-19T14:29:09 | 2021-03-19T14:29:09 | mcmillanmajora | [] | I updated and clarified the REFreSD [data card](https://github.com/mcmillanmajora/datasets/blob/refresd_card/datasets/refresd/README.md) with information from the Eleftheria's [website](https://elbria.github.io/post/refresd/). I added brief descriptions where the initial card referred to the paper, and I also recreated... | true |
835,112,968 | https://api.github.com/repos/huggingface/datasets/issues/2081 | https://github.com/huggingface/datasets/pull/2081 | 2,081 | Fix docstrings issues | closed | 0 | 2021-03-18T18:11:01 | 2021-04-07T14:37:43 | 2021-04-07T14:37:43 | albertvillanova | [
"documentation"
] | Fix docstring issues. | true |
835,023,000 | https://api.github.com/repos/huggingface/datasets/issues/2080 | https://github.com/huggingface/datasets/issues/2080 | 2,080 | Multidimensional arrays in a Dataset | closed | 2 | 2021-03-18T16:29:14 | 2021-03-25T12:46:53 | 2021-03-25T12:46:53 | vermouthmjl | [] | Hi,
I'm trying to put together a `datasets.Dataset` to be used with LayoutLM, which is available in `transformers`. This model requires as input the bounding boxes of each token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row.
... | false |
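For reference, fixed-shape multi-dimensional columns are expressible through the dedicated Array feature types; a minimal sketch for the bounding-box case (shapes illustrative):
```python
import numpy as np
from datasets import Array2D, Dataset, Features, Value

features = Features({
    "text": Value("string"),
    # One 4-coordinate bounding box per token, 512 tokens per sequence.
    "bbox": Array2D(shape=(512, 4), dtype="int64"),
})

ds = Dataset.from_dict(
    {"text": ["doc"], "bbox": [np.zeros((512, 4), dtype=np.int64).tolist()]},
    features=features,
)
```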
834,920,493 | https://api.github.com/repos/huggingface/datasets/issues/2079 | https://github.com/huggingface/datasets/pull/2079 | 2,079 | Refactorize Metric.compute signature to force keyword arguments only | closed | 0 | 2021-03-18T15:05:50 | 2021-03-23T15:31:44 | 2021-03-23T15:31:44 | albertvillanova | [] | Minor refactoring of Metric.compute signature to force the use of keyword arguments, by using the single star syntax. | true |
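The single-star syntax referenced:
```python
class Metric:
    def compute(self, *, predictions=None, references=None, **kwargs):
        # The bare * makes every argument keyword-only:
        #   metric.compute(preds, refs)                        -> TypeError
        #   metric.compute(predictions=preds, references=refs) -> OK
        ...
```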
834,694,819 | https://api.github.com/repos/huggingface/datasets/issues/2078 | https://github.com/huggingface/datasets/issues/2078 | 2,078 | MemoryError when computing WER metric | closed | 11 | 2021-03-18T11:30:05 | 2021-05-01T08:31:49 | 2021-04-06T07:20:43 | diego-fustes | [
"metric bug"
] | Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation:
```
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
However, I receive the following exception:
`Traceback (most recent call last):
File ... | false |
834,649,536 | https://api.github.com/repos/huggingface/datasets/issues/2077 | https://github.com/huggingface/datasets/pull/2077 | 2,077 | Bump huggingface_hub version | closed | 1 | 2021-03-18T10:54:34 | 2021-03-18T11:33:26 | 2021-03-18T11:33:26 | SBrandeis | [] | `0.0.2 => 0.0.6` | true |
834,445,296 | https://api.github.com/repos/huggingface/datasets/issues/2076 | https://github.com/huggingface/datasets/issues/2076 | 2,076 | Issue: Dataset download error | open | 7 | 2021-03-18T06:36:06 | 2021-03-22T11:52:31 | null | XuhuiZhou | [
"dataset bug"
] | The download link in `iwslt2017.py` file does not seem to work anymore.
For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz`
Would be nice if we could modify the script and use the new download link? | false |
834,301,246 | https://api.github.com/repos/huggingface/datasets/issues/2075 | https://github.com/huggingface/datasets/issues/2075 | 2,075 | ConnectionError: Couldn't reach common_voice.py | closed | 2 | 2021-03-18T01:19:06 | 2021-03-20T10:29:41 | 2021-03-20T10:29:41 | LifaSun | [] | When I run:
from datasets import load_dataset, load_metric
common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation")
common_voice_test = load_dataset("common_voice", "zh-CN", split="test")
Got:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/ma... | false |
834,268,463 | https://api.github.com/repos/huggingface/datasets/issues/2074 | https://github.com/huggingface/datasets/pull/2074 | 2,074 | Fix size categories in YAML Tags | closed | 9 | 2021-03-18T00:02:36 | 2021-03-23T17:11:10 | 2021-03-23T17:11:10 | gchhablani | [] | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for datas... | true |
834,192,501 | https://api.github.com/repos/huggingface/datasets/issues/2073 | https://github.com/huggingface/datasets/pull/2073 | 2,073 | Fixes check of TF_AVAILABLE and TORCH_AVAILABLE | closed | 0 | 2021-03-17T21:28:53 | 2021-03-18T09:09:25 | 2021-03-18T09:09:24 | philschmid | [] | # What is this PR doing
This PR implements the checks for whether `Tensorflow` and `Pytorch` are available in the same way as `transformers` does. I added additional checks for the different `Tensorflow` and `torch` versions. #2068 | true |
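A minimal sketch of the transformers-style availability check described (variable names are illustrative; the actual PR may differ in detail):
```python
import importlib.util
from importlib.metadata import PackageNotFoundError, version  # Python 3.8+

TORCH_AVAILABLE = importlib.util.find_spec("torch") is not None
TORCH_VERSION = "N/A"
if TORCH_AVAILABLE:
    try:
        # Confirm the package is actually installed, not merely importable.
        TORCH_VERSION = version("torch")
    except PackageNotFoundError:
        TORCH_AVAILABLE = False
```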
834,054,837 | https://api.github.com/repos/huggingface/datasets/issues/2072 | https://github.com/huggingface/datasets/pull/2072 | 2,072 | Fix docstring issues | closed | 2 | 2021-03-17T18:13:44 | 2021-03-24T08:20:57 | 2021-03-18T12:41:21 | albertvillanova | [
"documentation"
] | Fix docstring issues. | true |
833,950,824 | https://api.github.com/repos/huggingface/datasets/issues/2071 | https://github.com/huggingface/datasets/issues/2071 | 2,071 | Multiprocessing is slower than single process | closed | 1 | 2021-03-17T16:08:58 | 2021-03-18T09:10:23 | 2021-03-18T09:10:23 | theo-m | [
"bug"
] | ```python
# benchmark_filter.py
import logging
import sys
import time
from datasets import load_dataset, set_caching_enabled
if __name__ == "__main__":
set_caching_enabled(False)
logging.basicConfig(level=logging.DEBUG)
bc = load_dataset("bookcorpus")
now = time.time()
try:
... | false |