| id (int64) | url (string) | html_url (string) | number (int64) | title (string) | state (string) | comments (int64) | created_at (timestamp) | updated_at (timestamp) | closed_at (timestamp) | user_login (string) | labels (list) | body (string) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
895,610,216 | https://api.github.com/repos/huggingface/datasets/issues/2382 | https://github.com/huggingface/datasets/issues/2382 | 2,382 | DuplicatedKeysError: FAILURE TO GENERATE DATASET ! load_dataset('head_qa', 'en') | closed | 0 | 2021-05-19T15:49:48 | 2021-05-30T13:26:16 | 2021-05-30T13:26:16 | helloworld123-lab | [] | Hello everyone,
I am trying to use the head_qa dataset from [https://huggingface.co/datasets/viewer/?dataset=head_qa&config=en](url)
```
!pip install datasets
from datasets import load_dataset
dataset = load_dataset(
'head_qa', 'en')
```
When I run the load_dataset(...) call above, it throws the following:
```
Duplicated... | false |
895,588,844 | https://api.github.com/repos/huggingface/datasets/issues/2381 | https://github.com/huggingface/datasets/pull/2381 | 2,381 | add dataset card title | closed | 0 | 2021-05-19T15:30:03 | 2021-05-20T18:51:40 | 2021-05-20T18:51:40 | bhavitvyamalik | [] | A few of them were missed by me earlier; I've added them now | true |
895,367,201 | https://api.github.com/repos/huggingface/datasets/issues/2380 | https://github.com/huggingface/datasets/pull/2380 | 2,380 | maintain YAML structure reading from README | closed | 0 | 2021-05-19T12:12:07 | 2021-05-19T13:08:38 | 2021-05-19T13:08:38 | bhavitvyamalik | [] | How YAML used to be loaded earlier as a string (the YAML structure was affected because of this, and YAML for datasets with multiple configs was not being loaded correctly):
```
annotations_creators:
labeled_final:
- expert-generated
labeled_swap:
- expert-generated
unlabeled_final:
- machine-generated
language_c... | true |
895,252,597 | https://api.github.com/repos/huggingface/datasets/issues/2379 | https://github.com/huggingface/datasets/pull/2379 | 2,379 | Disallow duplicate keys in yaml tags | closed | 0 | 2021-05-19T10:10:07 | 2021-05-19T10:45:32 | 2021-05-19T10:45:31 | lhoestq | [] | Make sure that there are no duplicate keys in yaml tags.
I added the check in the yaml tree constructor's method, so that the verification is done at every level in the yaml structure.
cc @julien-c | true |
895,131,774 | https://api.github.com/repos/huggingface/datasets/issues/2378 | https://github.com/huggingface/datasets/issues/2378 | 2,378 | Add missing dataset_infos.json files | open | 0 | 2021-05-19T08:11:12 | 2021-05-19T08:11:12 | null | lewtun | [
"enhancement"
] | Some of the datasets in `datasets` are missing a `dataset_infos.json` file, e.g.
```
[PosixPath('datasets/chr_en/chr_en.py'), PosixPath('datasets/chr_en/README.md')]
[PosixPath('datasets/telugu_books/README.md'), PosixPath('datasets/telugu_books/telugu_books.py')]
[PosixPath('datasets/reclor/README.md'), PosixPat... | false |
894,918,927 | https://api.github.com/repos/huggingface/datasets/issues/2377 | https://github.com/huggingface/datasets/issues/2377 | 2,377 | ArrowDataset.save_to_disk produces files that cannot be read using pyarrow.feather | open | 4 | 2021-05-19T02:04:37 | 2024-01-18T08:06:15 | null | Ark-kun | [
"bug"
] | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from pyarrow import feather
dataset = load_dataset('imdb', split='train')
dataset.save_to_disk('dataset_dir')
table = feather.read_table('dataset_dir/dataset.arro... | false |
894,852,264 | https://api.github.com/repos/huggingface/datasets/issues/2376 | https://github.com/huggingface/datasets/pull/2376 | 2,376 | Improve task api code quality | closed | 2 | 2021-05-18T23:13:40 | 2021-06-02T20:39:57 | 2021-05-25T15:30:54 | mariosasko | [] | Improves the code quality of the `TaskTemplate` dataclasses.
Changes:
* replaces `return NotImplemented` with raise `NotImplementedError`
* replaces `sorted` with `len` in the uniqueness check
* defines `label2id` and `id2label` in the `TextClassification` template as properties
* replaces the `object.__setatt... | true |
894,655,157 | https://api.github.com/repos/huggingface/datasets/issues/2375 | https://github.com/huggingface/datasets/pull/2375 | 2,375 | Dataset Streaming | closed | 0 | 2021-05-18T18:20:00 | 2021-06-23T16:35:02 | 2021-06-23T16:35:01 | lhoestq | [] | # Dataset Streaming
## API
Current API is
```python
from datasets import load_dataset
# Load an IterableDataset without downloading data
snli = load_dataset("snli", streaming=True)
# Access examples by streaming data
print(next(iter(snli["train"])))
# {'premise': 'A person on a horse jumps over a br... | true |
894,579,364 | https://api.github.com/repos/huggingface/datasets/issues/2374 | https://github.com/huggingface/datasets/pull/2374 | 2,374 | add `desc` to `tqdm` in `Dataset.map()` | closed | 5 | 2021-05-18T16:44:29 | 2021-05-27T15:44:04 | 2021-05-26T14:59:21 | bhavitvyamalik | [] | Fixes #2330. Please let me know if anything else is required in this | true |
894,499,909 | https://api.github.com/repos/huggingface/datasets/issues/2373 | https://github.com/huggingface/datasets/issues/2373 | 2,373 | Loading dataset from local path | closed | 1 | 2021-05-18T15:20:50 | 2021-05-18T15:36:36 | 2021-05-18T15:36:35 | kolakows | [] | I'm trying to load a local dataset with the code below
```
ds = datasets.load_dataset('my_script.py',
data_files='corpus.txt',
data_dir='/data/dir',
cache_dir='.')
```
But internally a BuilderConfig is created, which tries to u... | false |
894,496,064 | https://api.github.com/repos/huggingface/datasets/issues/2372 | https://github.com/huggingface/datasets/pull/2372 | 2,372 | ConvQuestions benchmark added | closed | 3 | 2021-05-18T15:16:50 | 2021-05-26T10:31:45 | 2021-05-26T10:31:45 | PhilippChr | [] | Hello,
I would like to integrate our dataset on conversational QA. The answers are grounded in the KG.
The work was published in CIKM 2019 (https://dl.acm.org/doi/10.1145/3357384.3358016).
We hope for further research on how to deal with the challenges of factoid conversational QA.
Thanks! :) | true |
894,193,403 | https://api.github.com/repos/huggingface/datasets/issues/2371 | https://github.com/huggingface/datasets/issues/2371 | 2,371 | Align question answering tasks with sub-domains | closed | 1 | 2021-05-18T09:47:59 | 2023-07-25T16:52:05 | 2023-07-25T16:52:04 | lewtun | [
"enhancement"
] | As pointed out by @thomwolf in #2255 we should consider breaking with the pipeline taxonomy of `transformers` to account for the various types of question-answering domains:
> `question-answering` exists in two forms: abstractive and extractive question answering.
>
> we can keep a generic `question-answering` bu... | false |
893,606,432 | https://api.github.com/repos/huggingface/datasets/issues/2370 | https://github.com/huggingface/datasets/pull/2370 | 2,370 | Adding HendrycksTest dataset | closed | 5 | 2021-05-17T18:53:05 | 2023-05-11T05:42:57 | 2021-05-31T16:37:13 | andyzoujm | [] | Adding Hendrycks test from https://arxiv.org/abs/2009.03300.
I'm having a bit of trouble with dummy data creation because some lines in the csv files aren't being loaded properly (only the first entry loaded in a row of length 6). The dataset is loading just fine. Hope you can kindly help!
Thank you! | true |
893,554,153 | https://api.github.com/repos/huggingface/datasets/issues/2369 | https://github.com/huggingface/datasets/pull/2369 | 2,369 | correct labels of conll2003 | closed | 0 | 2021-05-17T17:37:54 | 2021-05-18T08:27:42 | 2021-05-18T08:27:42 | philschmid | [] | # What does this PR do?
It fixes/extends the `ner_tags` for conll2003 to include all.
Paper reference https://arxiv.org/pdf/cs/0306050v1.pdf
Model reference https://huggingface.co/elastic/distilbert-base-cased-finetuned-conll03-english/blob/main/config.json
| true |
893,411,076 | https://api.github.com/repos/huggingface/datasets/issues/2368 | https://github.com/huggingface/datasets/pull/2368 | 2,368 | Allow "other-X" in licenses | closed | 0 | 2021-05-17T14:47:54 | 2021-05-17T16:36:27 | 2021-05-17T16:36:27 | gchhablani | [] | This PR allows "other-X" licenses during metadata validation.
@lhoestq | true |
893,317,427 | https://api.github.com/repos/huggingface/datasets/issues/2367 | https://github.com/huggingface/datasets/pull/2367 | 2,367 | Remove getchildren from hyperpartisan news detection | closed | 0 | 2021-05-17T13:10:37 | 2021-05-17T14:07:13 | 2021-05-17T14:07:13 | ghomasHudson | [] | `Element.getchildren()` is now deprecated in the ElementTree library (I think in python 3.9, so it still passes the automated tests which are using 3.6. But for those of us on bleeding-edge distros it now fails).
https://bugs.python.org/issue29209 | true |
893,185,266 | https://api.github.com/repos/huggingface/datasets/issues/2366 | https://github.com/huggingface/datasets/issues/2366 | 2,366 | Json loader fails if user-specified features don't match the json data fields order | closed | 0 | 2021-05-17T10:26:08 | 2021-06-16T10:47:49 | 2021-06-16T10:47:49 | lhoestq | [
"bug"
] | If you do
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then depending on the order of the features in the json data field it fails:
```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
94 if s... | false |
893,179,697 | https://api.github.com/repos/huggingface/datasets/issues/2365 | https://github.com/huggingface/datasets/issues/2365 | 2,365 | Missing ClassLabel encoding in Json loader | closed | 0 | 2021-05-17T10:19:10 | 2021-06-28T15:05:34 | 2021-06-28T15:05:34 | lhoestq | [
"bug"
] | Currently if you want to load a json dataset this way
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then if your features have ClassLabel types and if your json data needs class label encoding (i.e. if the labels in the json files are strings and not integers), then it would ... | false |
892,420,500 | https://api.github.com/repos/huggingface/datasets/issues/2364 | https://github.com/huggingface/datasets/pull/2364 | 2,364 | README updated for SNLI, MNLI | closed | 2 | 2021-05-15T11:37:59 | 2021-05-17T14:14:27 | 2021-05-17T13:34:19 | bhavitvyamalik | [] | Closes #2275. Mentioned the -1 labels in MNLI and SNLI and how they should be removed before training. @lhoestq `check_code_quality` test might fail for MNLI as the license name `other-Open Portion of the American National Corpus` is not a registered tag for 'licenses' | true |
892,100,749 | https://api.github.com/repos/huggingface/datasets/issues/2362 | https://github.com/huggingface/datasets/pull/2362 | 2,362 | Fix web_nlg metadata | closed | 3 | 2021-05-14T17:15:07 | 2021-05-17T13:44:17 | 2021-05-17T13:42:28 | julien-c | [] | Our metadata storage system does not support `.` inside keys. cc @Pierrci
| true |
891,982,808 | https://api.github.com/repos/huggingface/datasets/issues/2361 | https://github.com/huggingface/datasets/pull/2361 | 2,361 | Preserve dtype for numpy/torch/tf/jax arrays | closed | 6 | 2021-05-14T14:45:23 | 2021-08-17T08:30:04 | 2021-08-17T08:30:04 | bhavitvyamalik | [] | Fixes #625. This lets the user preserve the dtype of numpy array to pyarrow array which was getting lost due to conversion of numpy array -> list -> pyarrow array. | true |
891,965,964 | https://api.github.com/repos/huggingface/datasets/issues/2360 | https://github.com/huggingface/datasets/issues/2360 | 2,360 | Automatically detect datasets with compatible task schemas | open | 0 | 2021-05-14T14:23:40 | 2021-05-14T14:23:40 | null | lewtun | [
"enhancement"
] | See description of #2255 for details.
| false |
891,946,017 | https://api.github.com/repos/huggingface/datasets/issues/2359 | https://github.com/huggingface/datasets/issues/2359 | 2,359 | Allow model labels to be passed during task preparation | closed | 1 | 2021-05-14T13:58:28 | 2022-10-05T17:37:22 | 2022-10-05T17:37:22 | lewtun | [] | Models have a config with label2id. And we have the same for datasets with the ClassLabel feature type. At one point either the model or the dataset must sync with the other. It would be great to do that on the dataset side.
For example, for sentiment classification on amazon reviews you could have these labels:... | false |
891,269,577 | https://api.github.com/repos/huggingface/datasets/issues/2358 | https://github.com/huggingface/datasets/pull/2358 | 2,358 | Roman Urdu Stopwords List | closed | 2 | 2021-05-13T18:29:27 | 2021-05-19T08:50:43 | 2021-05-17T14:05:10 | devzohaib | [] | A list of most frequently used Roman Urdu words with different spellings and usages.
This is a basic effort to collect some stopwords for Roman Urdu, to help with analyzing text data in Roman Urdu, which makes up a huge part of the daily internet interaction of Roman Urdu users. | true |
890,595,693 | https://api.github.com/repos/huggingface/datasets/issues/2357 | https://github.com/huggingface/datasets/pull/2357 | 2,357 | Adding Microsoft CodeXGlue Datasets | closed | 16 | 2021-05-13T00:43:01 | 2021-06-08T09:29:57 | 2021-06-08T09:29:57 | ncoop57 | [] | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR wi... | true |
890,484,408 | https://api.github.com/repos/huggingface/datasets/issues/2355 | https://github.com/huggingface/datasets/pull/2355 | 2,355 | normalized TOCs and titles in data cards | closed | 3 | 2021-05-12T20:59:59 | 2021-05-14T13:23:12 | 2021-05-14T13:23:12 | yjernite | [] | I started fixing some of the READMEs that were failing the tests introduced by @gchhablani but then realized that there were some consistent differences between earlier and newer versions of some of the titles (e.g. Data Splits vs Data Splits Sample Size, Supported Tasks vs Supported Tasks and Leaderboards). We also ha... | true |
890,439,523 | https://api.github.com/repos/huggingface/datasets/issues/2354 | https://github.com/huggingface/datasets/issues/2354 | 2,354 | Document DatasetInfo attributes | closed | 0 | 2021-05-12T20:01:29 | 2021-05-22T09:26:14 | 2021-05-22T09:26:14 | lewtun | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
As noted in PR #2255, the attributes of `DatasetInfo` are not documented in the [docs](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=datasetinfo#datasetinfo). It would be nice to do so :)
| false |
890,296,262 | https://api.github.com/repos/huggingface/datasets/issues/2353 | https://github.com/huggingface/datasets/pull/2353 | 2,353 | Update README validation rules | closed | 0 | 2021-05-12T16:57:26 | 2021-05-14T08:56:06 | 2021-05-14T08:56:06 | gchhablani | [] | This PR allows unexpected subsections under third-level headings. All except `Contributions`.
@lhoestq | true |
889,810,100 | https://api.github.com/repos/huggingface/datasets/issues/2352 | https://github.com/huggingface/datasets/pull/2352 | 2,352 | Set to_json default to JSON lines | closed | 2 | 2021-05-12T08:19:25 | 2021-05-21T09:01:14 | 2021-05-21T09:01:13 | albertvillanova | [] | With this PR, the method `Dataset.to_json`:
- is added to the docs
- defaults to JSON lines | true |
889,584,953 | https://api.github.com/repos/huggingface/datasets/issues/2351 | https://github.com/huggingface/datasets/pull/2351 | 2,351 | simplify faiss index save | closed | 0 | 2021-05-12T03:54:10 | 2021-05-17T13:41:41 | 2021-05-17T13:41:41 | Guitaricet | [] | Fixes #2350
In some cases, Faiss GPU index objects have neither "device" nor "getDevice". Possibly this happens when some part of the index is computed on CPU.
In particular, this would happen with the index `OPQ16_128,IVF512,PQ32` (issue #2350). I did check it, but it is likely that `OPQ` or `PQ` transfor... | true |
889,580,247 | https://api.github.com/repos/huggingface/datasets/issues/2350 | https://github.com/huggingface/datasets/issues/2350 | 2,350 | `FaissIndex.save` throws error on GPU | closed | 1 | 2021-05-12T03:41:56 | 2021-05-17T13:41:41 | 2021-05-17T13:41:41 | Guitaricet | [
"bug"
] | ## Describe the bug
After training an index with a factory string `OPQ16_128,IVF512,PQ32` on GPU, `.save_faiss_index` throws this error.
```
File "index_wikipedia.py", line 119, in <module>
data["train"].save_faiss_index("text_emb", index_save_path)
File "/home/vlialin/miniconda3/envs/cat/lib/python3.8... | false |
888,586,018 | https://api.github.com/repos/huggingface/datasets/issues/2349 | https://github.com/huggingface/datasets/pull/2349 | 2,349 | Update task_ids for Ascent KB | closed | 0 | 2021-05-11T20:44:33 | 2021-05-17T10:53:14 | 2021-05-17T10:48:34 | phongnt570 | [] | This "other-other-knowledge-base" task is better suited for the dataset. | true |
887,927,737 | https://api.github.com/repos/huggingface/datasets/issues/2348 | https://github.com/huggingface/datasets/pull/2348 | 2,348 | Add tests for dataset cards | closed | 2 | 2021-05-11T17:14:27 | 2021-05-21T12:10:47 | 2021-05-21T12:10:47 | gchhablani | [] | Adding tests for dataset cards
This PR will potentially remove the scripts being used for dataset tags and readme validation.
Additionally, this will allow testing dataset readmes by providing the name as follows:
```bash
pytest tests/test_dataset_cards.py::test_dataset_tags[fashion_mnist]
```
and
```bas... | true |
887,404,868 | https://api.github.com/repos/huggingface/datasets/issues/2347 | https://github.com/huggingface/datasets/issues/2347 | 2,347 | Add an API to access the language and pretty name of a dataset | closed | 6 | 2021-05-11T14:10:08 | 2022-10-05T17:16:54 | 2022-10-05T17:16:53 | sgugger | [
"enhancement"
] | It would be super nice to have an API to get some metadata of the dataset from the name and args passed to `load_dataset`. This way we could programmatically infer the language and the name of a dataset when creating model cards automatically in the Transformers examples scripts. | false |
886,632,114 | https://api.github.com/repos/huggingface/datasets/issues/2346 | https://github.com/huggingface/datasets/pull/2346 | 2,346 | Add Qasper Dataset | closed | 1 | 2021-05-11T09:25:44 | 2021-05-18T12:28:28 | 2021-05-18T12:28:28 | cceyda | [] | [Question Answering on Scientific Research Papers](https://allenai.org/project/qasper/home)
Doing NLP on NLP papers to do NLP ♻️ I had to add it~
- [x] Add README (just gotta fill out some more )
- [x] Dataloader code
- [x] Make dummy dataset
- [x] generate dataset infos
- [x] Tests
| true |
886,586,872 | https://api.github.com/repos/huggingface/datasets/issues/2345 | https://github.com/huggingface/datasets/issues/2345 | 2,345 | [Question] How to move and reuse preprocessed dataset? | closed | 4 | 2021-05-11T09:09:17 | 2021-06-11T04:39:11 | 2021-06-11T04:39:11 | AtmaHou | [] | Hi, I am training a gpt-2 from scratch using run_clm.py.
I want to move and reuse the preprocessed dataset (it takes 2 hours to preprocess),
I tried to:
copy path_to_cache_dir/datasets to new_cache_dir/datasets
set export HF_DATASETS_CACHE="new_cache_dir/"
but the program still re-preprocesses the whole dataset... | false |
885,331,505 | https://api.github.com/repos/huggingface/datasets/issues/2344 | https://github.com/huggingface/datasets/issues/2344 | 2,344 | Is there a way to join multiple datasets in one? | open | 2 | 2021-05-10T23:16:10 | 2022-10-05T17:27:05 | null | avacaondata | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
I need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these 2?
**Describe the solution you'd like**
I'd like to join them with a merge or join method, just like pandas dataframes.
**Add... | false |
883,208,539 | https://api.github.com/repos/huggingface/datasets/issues/2343 | https://github.com/huggingface/datasets/issues/2343 | 2,343 | Columns are removed before or after map function applied? | open | 1 | 2021-05-10T02:36:20 | 2022-10-24T11:31:55 | null | taghizad3h | [
"bug"
] | ## Describe the bug
According to the documentation, when applying the map function the [remove_columns](https://huggingface.co/docs/datasets/processing.html#removing-columns) will be removed after they are passed to the function, but in the [source code](https://huggingface.co/docs/datasets/package_reference/main_classes.... | false |
882,981,420 | https://api.github.com/repos/huggingface/datasets/issues/2342 | https://github.com/huggingface/datasets/pull/2342 | 2,342 | Docs - CER above 1 | closed | 0 | 2021-05-09T23:41:00 | 2021-05-10T13:34:00 | 2021-05-10T13:34:00 | borisdayma | [] | CER can actually be greater than 1. | true |
882,370,933 | https://api.github.com/repos/huggingface/datasets/issues/2341 | https://github.com/huggingface/datasets/pull/2341 | 2,341 | Added the Ascent KB | closed | 1 | 2021-05-09T14:17:39 | 2021-05-11T09:16:59 | 2021-05-11T09:16:59 | phongnt570 | [] | Added the Ascent Commonsense KB of 8.9M assertions.
- Paper: [Advanced Semantics for Commonsense Knowledge Extraction (WWW'21)](https://arxiv.org/abs/2011.00905)
- Website: https://ascent.mpi-inf.mpg.de/
(I am the author of the dataset) | true |
882,370,824 | https://api.github.com/repos/huggingface/datasets/issues/2340 | https://github.com/huggingface/datasets/pull/2340 | 2,340 | More consistent copy logic | closed | 0 | 2021-05-09T14:17:33 | 2021-05-11T08:58:33 | 2021-05-11T08:58:33 | mariosasko | [] | Use `info.copy()` instead of `copy.deepcopy(info)`.
`Features.copy` now creates a deep copy. | true |
882,046,077 | https://api.github.com/repos/huggingface/datasets/issues/2338 | https://github.com/huggingface/datasets/pull/2338 | 2,338 | fixed download link for web_science | closed | 0 | 2021-05-09T09:12:20 | 2021-05-10T13:35:53 | 2021-05-10T13:35:53 | bhavitvyamalik | [] | Fixes #2337. Should work with:
`dataset = load_dataset("web_of_science", "WOS11967", ignore_verifications=True)` | true |
881,610,567 | https://api.github.com/repos/huggingface/datasets/issues/2337 | https://github.com/huggingface/datasets/issues/2337 | 2,337 | NonMatchingChecksumError for web_of_science dataset | closed | 1 | 2021-05-09T02:02:02 | 2021-05-10T13:35:53 | 2021-05-10T13:35:53 | nbroad1881 | [
"bug"
] | NonMatchingChecksumError when trying to download the web_of_science dataset.
>NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://data.mendeley.com/datasets/9rw3vkcfy4/6/files/c9ea673d-5542-44c0-ab7b-f1311f7d61df/WebOfScience.zip?dl=1']
Setting `ignore_verifications=True` results... | false |
881,298,783 | https://api.github.com/repos/huggingface/datasets/issues/2336 | https://github.com/huggingface/datasets/pull/2336 | 2,336 | Fix overflow issue in interpolation search | closed | 3 | 2021-05-08T20:51:36 | 2021-05-10T13:29:07 | 2021-05-10T13:26:12 | mariosasko | [] | Fixes #2335
More info about this error can be found [here](https://stackoverflow.com/questions/53239890/why-do-i-keep-getting-this-error-runtimewarning-overflow-encountered-in-int-sc/53240100). | true |
881,291,887 | https://api.github.com/repos/huggingface/datasets/issues/2335 | https://github.com/huggingface/datasets/issues/2335 | 2,335 | Index error in Dataset.map | closed | 0 | 2021-05-08T20:44:57 | 2021-05-10T13:26:12 | 2021-05-10T13:26:12 | mariosasko | [
"bug"
] | The following code, if executed on master, raises an IndexError (due to overflow):
```python
>>> from datasets import *
>>> d = load_dataset("bookcorpus", split="train")
Reusing dataset bookcorpus (C:\Users\Mario\.cache\huggingface\datasets\bookcorpus\plain_text\1.0.0\44662c4a114441c35200992bea923b170e6f13f2f0beb7c... | false |
879,810,107 | https://api.github.com/repos/huggingface/datasets/issues/2334 | https://github.com/huggingface/datasets/pull/2334 | 2,334 | Updating the DART file checksums in GEM | closed | 1 | 2021-05-07T21:53:44 | 2021-05-07T22:18:10 | 2021-05-07T22:18:10 | yjernite | [] | The DART files were just updated on the source GitHub
https://github.com/Yale-LILY/dart/commit/34b3c872da4811523e334f1631e54ca8105dffab | true |
879,214,067 | https://api.github.com/repos/huggingface/datasets/issues/2333 | https://github.com/huggingface/datasets/pull/2333 | 2,333 | Fix duplicate keys | closed | 1 | 2021-05-07T15:28:08 | 2021-05-08T21:47:31 | 2021-05-07T15:57:08 | lhoestq | [] | As noticed in https://github.com/huggingface/datasets/pull/2245, many datasets yield duplicate keys.
Most of the time it was because the counter used for ids was reset at each new data file. | true |
879,041,608 | https://api.github.com/repos/huggingface/datasets/issues/2332 | https://github.com/huggingface/datasets/pull/2332 | 2,332 | Add note about indices mapping in save_to_disk docstring | closed | 0 | 2021-05-07T13:49:42 | 2021-05-07T17:20:48 | 2021-05-07T17:20:48 | lhoestq | [] | true | |
879,031,427 | https://api.github.com/repos/huggingface/datasets/issues/2331 | https://github.com/huggingface/datasets/issues/2331 | 2,331 | Add Topical-Chat | open | 0 | 2021-05-07T13:43:59 | 2021-05-07T13:43:59 | null | ktangri | [
"dataset request"
] | ## Adding a Dataset
- **Name:** Topical-Chat
- **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles
- **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf
- **... | false |
878,490,927 | https://api.github.com/repos/huggingface/datasets/issues/2330 | https://github.com/huggingface/datasets/issues/2330 | 2,330 | Allow passing `desc` to `tqdm` in `Dataset.map()` | closed | 2 | 2021-05-07T05:52:54 | 2021-05-26T14:59:21 | 2021-05-26T14:59:21 | changjonathanc | [
"enhancement",
"good first issue"
] | It's normal to have many `map()` calls, and some of them can take a few minutes;
it would be nice to have a description on the progress bar.
Alternative solution:
Print the description before/after the `map()` call. | false |
877,924,198 | https://api.github.com/repos/huggingface/datasets/issues/2329 | https://github.com/huggingface/datasets/pull/2329 | 2,329 | Add cache dir for in-memory datasets | closed | 7 | 2021-05-06T19:35:32 | 2021-06-08T19:46:48 | 2021-06-08T19:06:46 | mariosasko | [] | Adds the cache dir attribute to DatasetInfo as suggested by @lhoestq.
Should fix #2322 | true |
877,673,896 | https://api.github.com/repos/huggingface/datasets/issues/2328 | https://github.com/huggingface/datasets/pull/2328 | 2,328 | Add Matthews/Pearson/Spearman correlation metrics | closed | 0 | 2021-05-06T16:09:27 | 2021-05-06T16:58:10 | 2021-05-06T16:58:10 | lhoestq | [] | Added three metrics:
- The Matthews correlation coefficient (from sklearn)
- The Pearson correlation coefficient (from scipy)
- The Spearman correlation coefficient (from scipy)
cc @sgugger | true |
877,565,831 | https://api.github.com/repos/huggingface/datasets/issues/2327 | https://github.com/huggingface/datasets/issues/2327 | 2,327 | A syntax error in example | closed | 2 | 2021-05-06T14:34:44 | 2021-05-20T03:04:19 | 2021-05-20T03:04:19 | mymusise | [
"bug"
] | 
Sorry to report with an image, I can't find the template source code of this snippet. | false |
876,829,254 | https://api.github.com/repos/huggingface/datasets/issues/2326 | https://github.com/huggingface/datasets/pull/2326 | 2,326 | Enable auto-download for PAN-X / Wikiann domain in XTREME | closed | 0 | 2021-05-05T20:58:38 | 2021-05-07T08:41:10 | 2021-05-07T08:41:10 | lewtun | [] | This PR replaces the manual download of the `PAN-X.lang` domains with an auto-download from a Dropbox link provided by the Wikiann author. We also add the relevant dummy data for these domains.
While re-generating `dataset_infos.json` I ran into a `KeyError` in the `udpos.Arabic` domain so have included a fix for th... | true |
876,653,121 | https://api.github.com/repos/huggingface/datasets/issues/2325 | https://github.com/huggingface/datasets/pull/2325 | 2,325 | Added the HLGD dataset | closed | 2 | 2021-05-05T16:53:29 | 2021-05-12T14:55:13 | 2021-05-12T14:16:38 | tingofurro | [] | Added the Headline Grouping Dataset (HLGD), from the NAACL2021 paper: News Headline Grouping as a Challenging NLU Task
Dataset Link: https://github.com/tingofurro/headline_grouping
Paper link: https://people.eecs.berkeley.edu/~phillab/pdfs/NAACL2021_HLG.pdf | true |
876,602,064 | https://api.github.com/repos/huggingface/datasets/issues/2324 | https://github.com/huggingface/datasets/pull/2324 | 2,324 | Create Audio feature | closed | 30 | 2021-05-05T15:55:22 | 2021-10-13T10:26:33 | 2021-10-13T10:26:33 | albertvillanova | [] | Create `Audio` feature to handle raw audio files.
Some decisions to be further discussed:
- I have chosen `soundfile` as the audio library; another interesting library is `librosa`, but this requires `soundfile` (see [here](https://github.com/librosa/librosa/blob/main/setup.cfg#L53)). If we require some more advanc... | true |
876,438,507 | https://api.github.com/repos/huggingface/datasets/issues/2323 | https://github.com/huggingface/datasets/issues/2323 | 2,323 | load_dataset("timit_asr") gives back duplicates of just one sample text | closed | 3 | 2021-05-05T13:14:48 | 2021-05-07T10:32:30 | 2021-05-07T10:32:30 | ekeleshian | [
"bug"
] | ## Describe the bug
When you look up the key ["train"] and then ['text'], you get back a list with just one sentence duplicated 4620 times. Namely, the sentence "Would such an act of refusal be useful?". Similarly, when you look up ['test'] and then ['text'], the list is one sentence repeated "The bungalow was pleasant... | false |
876,383,853 | https://api.github.com/repos/huggingface/datasets/issues/2322 | https://github.com/huggingface/datasets/issues/2322 | 2,322 | Calls to map are not cached. | closed | 6 | 2021-05-05T12:11:27 | 2021-06-08T19:10:02 | 2021-06-08T19:08:21 | villmow | [
"bug"
] | ## Describe the bug
Somehow caching does not work for me anymore. Am I doing something wrong, or is there anything that I missed?
## Steps to reproduce the bug
```python
import datasets
datasets.set_caching_enabled(True)
sst = datasets.load_dataset("sst")
def foo(samples, i):
print("executed", i[:10])... | false |
876,304,364 | https://api.github.com/repos/huggingface/datasets/issues/2321 | https://github.com/huggingface/datasets/pull/2321 | 2,321 | Set encoding in OSCAR dataset | closed | 0 | 2021-05-05T10:27:03 | 2021-05-05T10:50:55 | 2021-05-05T10:50:55 | albertvillanova | [] | Set explicit `utf-8` encoding in OSCAR dataset, to avoid using the system default `cp1252` on Windows platforms.
Fix #2319. | true |
876,257,026 | https://api.github.com/repos/huggingface/datasets/issues/2320 | https://github.com/huggingface/datasets/pull/2320 | 2,320 | Set default name in init_dynamic_modules | closed | 0 | 2021-05-05T09:30:03 | 2021-05-06T07:57:54 | 2021-05-06T07:57:54 | albertvillanova | [] | Set default value for the name of dynamic modules.
Close #2318. | true |
876,251,376 | https://api.github.com/repos/huggingface/datasets/issues/2319 | https://github.com/huggingface/datasets/issues/2319 | 2,319 | UnicodeDecodeError for OSCAR (Afrikaans) | closed | 3 | 2021-05-05T09:22:52 | 2021-05-05T10:57:31 | 2021-05-05T10:50:55 | sgraaf | [
"bug"
] | ## Describe the bug
When loading the [OSCAR dataset](https://huggingface.co/datasets/oscar) (specifically `unshuffled_deduplicated_af`), I encounter a `UnicodeDecodeError`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("oscar", "unshuffled_deduplicated_af")
```... | false |
876,212,460 | https://api.github.com/repos/huggingface/datasets/issues/2318 | https://github.com/huggingface/datasets/issues/2318 | 2,318 | [api request] API to obtain "dataset_module" dynamic path? | closed | 5 | 2021-05-05T08:40:48 | 2021-05-06T08:45:45 | 2021-05-06T07:57:54 | richardliaw | [
"enhancement"
] | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
This is an awesome library.
It seems like the dynamic module path in this library has broken some of the hyperparameter tuning functionality: https://discuss.huggingface.co/t/using-hyperparamet... | false |
875,767,318 | https://api.github.com/repos/huggingface/datasets/issues/2317 | https://github.com/huggingface/datasets/pull/2317 | 2,317 | Fix incorrect version specification for the pyarrow package | closed | 0 | 2021-05-04T19:30:20 | 2021-05-05T10:09:16 | 2021-05-05T09:21:58 | cemilcengiz | [] | This PR addresses the bug in the pyarrow version specification, which is detailed in #2316 .
Simply, I put a comma between the version bounds.
Fix #2316. | true |
875,756,353 | https://api.github.com/repos/huggingface/datasets/issues/2316 | https://github.com/huggingface/datasets/issues/2316 | 2,316 | Incorrect version specification for pyarrow | closed | 1 | 2021-05-04T19:15:11 | 2021-05-05T10:10:03 | 2021-05-05T10:10:03 | cemilcengiz | [
"bug"
] | ## Describe the bug
The pyarrow dependency is incorrectly specified in setup.py file, in [this line](https://github.com/huggingface/datasets/blob/3a3e5a4da20bfcd75f8b6a6869b240af8feccc12/setup.py#L77).
Also as a snippet:
```python
"pyarrow>=1.0.0<4.0.0",
```
## Steps to reproduce the bug
```bash
pip install... | false |
875,742,200 | https://api.github.com/repos/huggingface/datasets/issues/2315 | https://github.com/huggingface/datasets/pull/2315 | 2,315 | Datasets cli improvements | closed | 1 | 2021-05-04T18:55:11 | 2021-05-10T16:36:51 | 2021-05-10T16:36:50 | mariosasko | [] | This PR:
* replaces the code from the `bug_report.md` that was used to get relevant system info with a dedicated command (a more elegant approach than copy-pasting the code IMO)
* removes the `download` command (copied from the transformers repo?)
* adds missing help messages to the cli commands
| true |
875,729,271 | https://api.github.com/repos/huggingface/datasets/issues/2314 | https://github.com/huggingface/datasets/pull/2314 | 2,314 | Minor refactor prepare_module | closed | 2 | 2021-05-04T18:37:26 | 2021-10-13T09:07:34 | 2021-10-13T09:07:34 | albertvillanova | [] | Start to refactor `prepare_module` to try to decouple functionality.
This PR does:
- extract function `_initialize_dynamic_modules_namespace_package`
- extract function `_find_module_in_github_or_s3`
- some renaming of variables
- use of f-strings | true |
875,475,367 | https://api.github.com/repos/huggingface/datasets/issues/2313 | https://github.com/huggingface/datasets/pull/2313 | 2,313 | Remove unused head_hf_s3 function | closed | 0 | 2021-05-04T13:42:06 | 2021-05-07T09:31:42 | 2021-05-07T09:31:42 | albertvillanova | [] | Currently, the function `head_hf_s3` is not used:
- neither its returned result is used
- nor does it raise any exception, as exceptions are caught and returned (not raised)
This PR removes it. | true |
875,435,726 | https://api.github.com/repos/huggingface/datasets/issues/2312 | https://github.com/huggingface/datasets/pull/2312 | 2,312 | Add rename_columnS method | closed | 1 | 2021-05-04T12:57:53 | 2021-05-04T13:43:13 | 2021-05-04T13:43:12 | SBrandeis | [] | Cherry-picked from #2255 | true |
875,262,208 | https://api.github.com/repos/huggingface/datasets/issues/2311 | https://github.com/huggingface/datasets/pull/2311 | 2,311 | Add SLR52, SLR53 and SLR54 to OpenSLR | closed | 2 | 2021-05-04T09:08:03 | 2021-05-07T09:50:55 | 2021-05-07T09:50:55 | cahya-wirawan | [] | Add large speech datasets for Sinhala, Bengali and Nepali. | true |
875,096,051 | https://api.github.com/repos/huggingface/datasets/issues/2310 | https://github.com/huggingface/datasets/pull/2310 | 2,310 | Update README.md | closed | 1 | 2021-05-04T04:38:01 | 2022-07-06T15:19:58 | 2022-07-06T15:19:58 | cryoff | [] | Provides description of data instances and dataset features | true |
874,644,990 | https://api.github.com/repos/huggingface/datasets/issues/2309 | https://github.com/huggingface/datasets/pull/2309 | 2,309 | Fix conda release | closed | 0 | 2021-05-03T14:52:59 | 2021-05-03T16:01:17 | 2021-05-03T16:01:17 | lhoestq | [] | There were a few issues with conda releases (they've been failing for a while now).
To fix this I had to:
- add the --single-version-externally-managed tag to the build stage (suggestion from [here](https://stackoverflow.com/a/64825075))
- set the python version of the conda build stage to 3.8 since 3.9 isn't suppor... | true |
873,961,435 | https://api.github.com/repos/huggingface/datasets/issues/2302 | https://github.com/huggingface/datasets/pull/2302 | 2,302 | Add SubjQA dataset | closed | 4 | 2021-05-02T14:51:20 | 2021-05-10T09:21:19 | 2021-05-10T09:21:19 | lewtun | [] | Hello datasetters 🙂!
Here's an interesting dataset about extractive question-answering on _subjective_ product / restaurant reviews. It's quite challenging for models fine-tuned on SQuAD and provides a nice example of domain adaptation (i.e. fine-tuning a SQuAD model on this domain gives better performance).
I f... | true |
873,941,266 | https://api.github.com/repos/huggingface/datasets/issues/2301 | https://github.com/huggingface/datasets/issues/2301 | 2,301 | Unable to setup dev env on Windows | closed | 2 | 2021-05-02T13:20:42 | 2021-05-03T15:18:01 | 2021-05-03T15:17:34 | gchhablani | [] | Hi
I tried installing the `".[dev]"` version on Windows 10 after cloning.
Here is the error I'm facing:
```bat
(env) C:\testing\datasets>pip install -e ".[dev]"
Obtaining file:///C:/testing/datasets
Requirement already satisfied: numpy>=1.17 in c:\programdata\anaconda3\envs\env\lib\site-packages (from datas... | false |
873,928,169 | https://api.github.com/repos/huggingface/datasets/issues/2300 | https://github.com/huggingface/datasets/issues/2300 | 2,300 | Add VoxPopuli | closed | 4 | 2021-05-02T12:17:40 | 2023-02-28T17:43:52 | 2023-02-28T17:43:51 | patrickvonplaten | [
"dataset request",
"speech"
] | ## Adding a Dataset
- **Name:** Voxpopuli
- **Description:** VoxPopuli is raw data collected from 2009-2020 European Parliament event recordings
- **Paper:** https://arxiv.org/abs/2101.00390
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** biggest unlabeled speech dataset
**Note**:... | false |
873,914,717 | https://api.github.com/repos/huggingface/datasets/issues/2299 | https://github.com/huggingface/datasets/issues/2299 | 2,299 | My iPhone | closed | 0 | 2021-05-02T11:11:11 | 2021-07-23T09:24:16 | 2021-05-03T08:17:38 | Jasonbuchanan1983 | [] | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | false |
873,771,942 | https://api.github.com/repos/huggingface/datasets/issues/2298 | https://github.com/huggingface/datasets/pull/2298 | 2,298 | Mapping in the distributed setting | closed | 0 | 2021-05-01T21:23:05 | 2021-05-03T13:54:53 | 2021-05-03T13:54:53 | TevenLeScao | [] | The barrier trick for distributed mapping as discussed on Thursday with @lhoestq | true |
872,974,907 | https://api.github.com/repos/huggingface/datasets/issues/2296 | https://github.com/huggingface/datasets/issues/2296 | 2,296 | 1 | closed | 0 | 2021-04-30T17:53:49 | 2021-05-03T08:17:31 | 2021-05-03T08:17:31 | zinnyi | [
"dataset request"
] | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | false |
872,902,867 | https://api.github.com/repos/huggingface/datasets/issues/2295 | https://github.com/huggingface/datasets/pull/2295 | 2,295 | Create ExtractManager | closed | 2 | 2021-04-30T17:13:34 | 2021-07-12T14:12:03 | 2021-07-08T08:11:49 | albertvillanova | [
"refactoring"
] | Perform refactoring to decouple extract functionality. | true |
872,136,075 | https://api.github.com/repos/huggingface/datasets/issues/2294 | https://github.com/huggingface/datasets/issues/2294 | 2,294 | Slow #0 when using map to tokenize. | open | 3 | 2021-04-30T08:00:33 | 2021-05-04T11:00:11 | null | VerdureChen | [] | Hi, _datasets_ is really amazing! I am following [run_mlm_no_trainer.py](url) to pre-train BERT, and it uses `tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
loa... | false |
872,079,385 | https://api.github.com/repos/huggingface/datasets/issues/2293 | https://github.com/huggingface/datasets/pull/2293 | 2,293 | imdb dataset from Don't Stop Pretraining Paper | closed | 0 | 2021-04-30T06:40:48 | 2021-04-30T06:54:25 | 2021-04-30T06:54:25 | BobbyManion | [] | true | |
871,230,183 | https://api.github.com/repos/huggingface/datasets/issues/2292 | https://github.com/huggingface/datasets/pull/2292 | 2,292 | Fixed typo seperate->separate | closed | 0 | 2021-04-29T16:40:53 | 2021-04-30T13:29:18 | 2021-04-30T13:03:12 | laksh9950 | [] | true | |
871,216,757 | https://api.github.com/repos/huggingface/datasets/issues/2291 | https://github.com/huggingface/datasets/pull/2291 | 2,291 | Don't copy recordbatches in memory during a table deepcopy | closed | 0 | 2021-04-29T16:26:05 | 2021-04-29T16:34:35 | 2021-04-29T16:34:34 | lhoestq | [] | Fix issue #2276 and hopefully #2134
The recordbatches of the `IndexedTableMixin` used to speed up queries to the table were copied in memory during a table deepcopy.
This resulted in `concatenate_datasets`, `load_from_disk` and other methods always bringing the data into memory.
I fixed the copy similarly to #2287... | true |
871,145,817 | https://api.github.com/repos/huggingface/datasets/issues/2290 | https://github.com/huggingface/datasets/pull/2290 | 2,290 | Bbaw egyptian | closed | 9 | 2021-04-29T15:27:58 | 2021-05-06T17:25:25 | 2021-05-06T17:25:25 | phiwi | [] | This is the "hieroglyph corpus" that I could unfortunately not contribute during the marathon. I re-extracted it now, so that it is in the state used in my paper (see documentation). I hope it satisfies your requirements and wish every scientist out there loads of fun deciphering a 5,000-year-old language :... | true |
871,118,573 | https://api.github.com/repos/huggingface/datasets/issues/2289 | https://github.com/huggingface/datasets/pull/2289 | 2,289 | Allow collaborators to self-assign issues | closed | 2 | 2021-04-29T15:07:06 | 2021-04-30T18:28:16 | 2021-04-30T18:28:16 | albertvillanova | [] | Allow collaborators (without write access to the repository) to self-assign issues.
In order to self-assign an issue, they have to comment it with the word: `#take` or `#self-assign`. | true |
871,111,235 | https://api.github.com/repos/huggingface/datasets/issues/2288 | https://github.com/huggingface/datasets/issues/2288 | 2,288 | Load_dataset for local CSV files | closed | 3 | 2021-04-29T15:01:10 | 2021-06-15T13:49:26 | 2021-06-15T13:49:26 | sstojanoska | [
"bug"
] | The method load_dataset fails to correctly load a dataset from csv.
Moreover, I am working on a token-classification task (POS tagging), where each row in my CSV contains two columns, each of them having a list of strings.
row example:
```tokens | labels
['I' , 'am', 'John'] | ['PRON', 'AUX', 'PROPN' ]
``... | false |
871,063,374 | https://api.github.com/repos/huggingface/datasets/issues/2287 | https://github.com/huggingface/datasets/pull/2287 | 2,287 | Avoid copying table's record batches | closed | 1 | 2021-04-29T14:15:01 | 2021-04-29T16:34:23 | 2021-04-29T16:34:22 | mariosasko | [] | Fixes #2276 | true |
871,032,393 | https://api.github.com/repos/huggingface/datasets/issues/2286 | https://github.com/huggingface/datasets/pull/2286 | 2,286 | Fix metadata validation with config names | closed | 0 | 2021-04-29T13:44:32 | 2021-04-29T14:07:29 | 2021-04-29T14:07:28 | lhoestq | [] | I noticed in https://github.com/huggingface/datasets/pull/2280 that the metadata validator doesn't parse the tags in the readme properly when then contain the tags per config. | true |
871,005,236 | https://api.github.com/repos/huggingface/datasets/issues/2285 | https://github.com/huggingface/datasets/issues/2285 | 2,285 | Help understanding how to build a dataset for language modeling as with the old TextDataset | closed | 2 | 2021-04-29T13:16:45 | 2021-05-19T07:22:45 | 2021-05-19T07:22:39 | danieldiezmallo | [] | Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line exceeds the normal 512-token limit of most tokenizers.
I would like to understand what is the process to build a text datas... | false |
870,932,710 | https://api.github.com/repos/huggingface/datasets/issues/2284 | https://github.com/huggingface/datasets/pull/2284 | 2,284 | Initialize Imdb dataset as used in Don't Stop Pretraining Paper | closed | 0 | 2021-04-29T11:52:38 | 2021-04-29T12:54:34 | 2021-04-29T12:54:34 | BobbyManion | [] | true | |
870,926,475 | https://api.github.com/repos/huggingface/datasets/issues/2283 | https://github.com/huggingface/datasets/pull/2283 | 2,283 | Initialize imdb dataset from don't stop pretraining paper | closed | 0 | 2021-04-29T11:44:54 | 2021-04-29T11:50:24 | 2021-04-29T11:50:24 | BobbyManion | [] | true | |
870,900,332 | https://api.github.com/repos/huggingface/datasets/issues/2282 | https://github.com/huggingface/datasets/pull/2282 | 2,282 | Initialize imdb dataset from don't stop pretraining paper | closed | 0 | 2021-04-29T11:17:56 | 2021-04-29T11:43:51 | 2021-04-29T11:43:51 | BobbyManion | [] | true | |
870,792,784 | https://api.github.com/repos/huggingface/datasets/issues/2281 | https://github.com/huggingface/datasets/pull/2281 | 2,281 | Update multi_woz_v22 checksum | closed | 0 | 2021-04-29T09:09:11 | 2021-04-29T13:41:35 | 2021-04-29T13:41:34 | lhoestq | [] | Fix issue https://github.com/huggingface/datasets/issues/1876
The files were changed in https://github.com/budzianowski/multiwoz/pull/72 | true |
870,780,431 | https://api.github.com/repos/huggingface/datasets/issues/2280 | https://github.com/huggingface/datasets/pull/2280 | 2,280 | Fixed typo seperate->separate | closed | 2 | 2021-04-29T08:55:46 | 2021-04-29T16:41:22 | 2021-04-29T16:41:16 | laksh9950 | [] | true | |
870,431,662 | https://api.github.com/repos/huggingface/datasets/issues/2279 | https://github.com/huggingface/datasets/issues/2279 | 2,279 | Compatibility with Ubuntu 18 and GLIBC 2.27? | closed | 2 | 2021-04-28T22:08:07 | 2021-04-29T07:42:42 | 2021-04-29T07:42:42 | tginart | [
"bug"
] | ## Describe the bug
For use on Ubuntu systems, it seems that datasets requires GLIBC 2.29. However, Ubuntu 18 runs with GLIBC 2.27 and it seems [non-trivial to upgrade GLIBC to 2.29 for Ubuntu 18 users](https://www.digitalocean.com/community/questions/how-install-glibc-2-29-or-higher-in-ubuntu-18-04).
I'm not sure... | false |
870,088,059 | https://api.github.com/repos/huggingface/datasets/issues/2278 | https://github.com/huggingface/datasets/issues/2278 | 2,278 | Loss result in GptNeoForCausal | closed | 1 | 2021-04-28T15:39:52 | 2021-05-06T16:14:23 | 2021-05-06T16:14:23 | Yossillamm | [
"enhancement"
] | Is there any way to get the "loss" and "logits" results in the GPT-Neo API? | false |
870,071,994 | https://api.github.com/repos/huggingface/datasets/issues/2277 | https://github.com/huggingface/datasets/pull/2277 | 2,277 | Create CacheManager | open | 0 | 2021-04-28T15:23:42 | 2022-07-06T15:19:48 | null | albertvillanova | [
"refactoring"
] | Perform refactoring to decouple cache functionality (method `as_dataset`). | true |
870,010,511 | https://api.github.com/repos/huggingface/datasets/issues/2276 | https://github.com/huggingface/datasets/issues/2276 | 2,276 | concatenate_datasets loads all the data into memory | closed | 7 | 2021-04-28T14:27:21 | 2021-05-03T08:41:55 | 2021-05-03T08:41:55 | chbensch | [
"bug"
] | ## Describe the bug
When I try to concatenate 2 datasets (10GB each), the entire data is loaded into memory instead of being written directly to disk.
Interestingly, this happens when trying to save the new dataset to disk or concatenating it again.
 and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a label possibility, and neither does the dataset viewer. As examples, see index 107... | false |
869,186,276 | https://api.github.com/repos/huggingface/datasets/issues/2274 | https://github.com/huggingface/datasets/pull/2274 | 2,274 | Always update metadata in arrow schema | closed | 0 | 2021-04-27T19:21:57 | 2022-06-03T08:31:19 | 2021-04-29T09:57:50 | lhoestq | [] | We store a redundant copy of the features in the metadata of the schema of the arrow table. This is used to recover the features when doing `Dataset.from_file`. These metadata are updated after each transform that changes the feature types.
For each function that transforms the feature types of the dataset, I added ... | true |
869,046,290 | https://api.github.com/repos/huggingface/datasets/issues/2273 | https://github.com/huggingface/datasets/pull/2273 | 2,273 | Added CUAD metrics | closed | 0 | 2021-04-27T16:49:12 | 2021-04-29T13:59:47 | 2021-04-29T13:59:47 | bhavitvyamalik | [] | `EM`, `F1`, `AUPR`, `Precision@80%Recall`, and `Precision@90%Recall` metrics supported for CUAD | true |