| column | dtype | min | max |
|---|---|---|---|
| id | int64 | 599M | 3.29B |
| url | string (lengths) | 58 | 61 |
| html_url | string (lengths) | 46 | 51 |
| number | int64 | 1 | 7.72k |
| title | string (lengths) | 1 | 290 |
| state | string (2 classes) |  |  |
| comments | int64 | 0 | 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-08-05 11:39:56 |
| closed_at | timestamp[s], nullable (⌀) | 2020-04-14 12:01:40 | 2025-08-01 05:15:45 |
| user_login | string (lengths) | 3 | 26 |
| labels | list (lengths) | 0 | 4 |
| body | string (lengths), nullable (⌀) | 0 | 228k |
| is_pull_request | bool (2 classes) |  |  |
1,073,603,508
https://api.github.com/repos/huggingface/datasets/issues/3401
https://github.com/huggingface/datasets/issues/3401
3,401
Add Wikimedia pre-processed datasets
closed
1
2021-12-07T17:33:19
2024-10-09T16:10:47
2024-10-09T16:10:47
albertvillanova
[ "dataset request" ]
## Adding a Dataset - **Name:** Add pre-processed data to: - *wikimedia/wikipedia*: https://huggingface.co/datasets/wikimedia/wikipedia - *wikimedia/wikisource*: https://huggingface.co/datasets/wikimedia/wikisource - **Description:** Add pre-processed data to the Hub for all languages - **Paper:** *link to the...
false
1,073,600,382
https://api.github.com/repos/huggingface/datasets/issues/3400
https://github.com/huggingface/datasets/issues/3400
3,400
Improve Wikipedia loading script
closed
2
2021-12-07T17:29:25
2022-03-22T16:52:28
2022-03-22T16:52:28
albertvillanova
[ "dataset request" ]
As reported by @geohci, the "wikipedia" processing/loading script could be improved by some additional small suggested processing functions: - _extract_content(filepath): - Replace .startswith("#redirect") with more structured approach: if elem.find(f"./{namespace}redirect") is None: continue - _parse_and_clean_wi...
false
1,073,593,861
https://api.github.com/repos/huggingface/datasets/issues/3399
https://github.com/huggingface/datasets/issues/3399
3,399
Add Wikisource dataset
closed
2
2021-12-07T17:21:31
2024-10-09T16:11:27
2024-10-09T16:11:26
albertvillanova
[ "dataset request" ]
## Adding a Dataset - **Name:** *wikisource* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** Additional high quality textual d...
false
1,073,590,384
https://api.github.com/repos/huggingface/datasets/issues/3398
https://github.com/huggingface/datasets/issues/3398
3,398
Add URL field to Wikimedia dataset instances: wikipedia,...
closed
5
2021-12-07T17:17:27
2022-03-22T16:53:27
2022-03-22T16:53:27
albertvillanova
[ "dataset request" ]
As reported by @geohci, in order to host pre-processed data in the Hub, we should add the full URL to data instances (new field "url"), so that we conform to proper attribution from license requirement. See, e.g.: https://fair-trec.github.io/docs/Fair_Ranking_2021_Participant_Instructions.pdf#subsection.3.2 This sho...
false
1,073,502,444
https://api.github.com/repos/huggingface/datasets/issues/3397
https://github.com/huggingface/datasets/pull/3397
3,397
add BNL newspapers
closed
9
2021-12-07T15:43:21
2022-01-17T18:35:34
2022-01-17T18:35:34
davanstrien
[]
This pull request adds the BNL's [processed newspaper collections](https://data.bnl.lu/data/historical-newspapers/) as a dataset. This is partly done to support BigScience see: https://github.com/bigscience-workshop/data_tooling/issues/192. The Datacard is more sparse than I would like but I plan to make a separate...
true
1,073,467,183
https://api.github.com/repos/huggingface/datasets/issues/3396
https://github.com/huggingface/datasets/issues/3396
3,396
Install Audio dependencies to support audio decoding
closed
5
2021-12-07T15:11:36
2022-04-25T16:12:22
2022-04-25T16:12:01
albertvillanova
[ "dataset-viewer", "audio_column" ]
## Dataset viewer issue for '*openslr*', '*projecte-aina/parlament_parla*' **Link:** *https://huggingface.co/datasets/openslr* **Link:** *https://huggingface.co/datasets/projecte-aina/parlament_parla* Error: ``` Status code: 400 Exception: ImportError Message: To support decoding audio files, ple...
false
1,073,432,650
https://api.github.com/repos/huggingface/datasets/issues/3395
https://github.com/huggingface/datasets/pull/3395
3,395
Fix formatting in IterableDataset.map docs
closed
0
2021-12-07T14:41:01
2021-12-08T10:11:33
2021-12-08T10:11:33
mariosasko
[]
Fix formatting in the recently added `Map` section of the streaming docs.
true
1,073,396,308
https://api.github.com/repos/huggingface/datasets/issues/3394
https://github.com/huggingface/datasets/issues/3394
3,394
Preserve all feature types when saving a dataset on the Hub with `push_to_hub`
closed
2
2021-12-07T14:08:30
2021-12-21T17:00:09
2021-12-21T17:00:09
mariosasko
[ "bug" ]
Currently, if one of the dataset features is of type `ClassLabel`, saving the dataset with `push_to_hub` and reloading the dataset with `load_dataset` will return the feature of type `Value`. To fix this, we should do something similar to `save_to_disk` (which correctly preserves the types) and not only push the parque...
false
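Issue 3394 above describes `ClassLabel` features degrading to `Value` after a `push_to_hub` round trip. A stdlib-only sketch (not the actual `datasets`/Parquet internals) of why this happens: Parquet stores only the physical integer column, while the label names live in separate feature metadata, so reloading from the Parquet file alone can only infer a plain integer type.

```python
# Hedged illustration in plain Python, not the real library code:
# a ClassLabel column is physically stored as integers; the label names
# are metadata that must be saved alongside the data to survive a round trip.
label_names = ["neg", "pos"]   # ClassLabel metadata
stored_column = [0, 1, 1, 0]   # what the Parquet file actually holds

# Reloading the Parquet file alone recovers only the integers,
# i.e. a plain int64 Value column:
without_metadata = list(stored_column)

# Saving the feature metadata too (as save_to_disk does) lets a loader
# reattach the label names:
with_metadata = [label_names[i] for i in stored_column]

print(without_metadata)  # [0, 1, 1, 0]
print(with_metadata)     # ['neg', 'pos', 'pos', 'neg']
```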
1,073,189,777
https://api.github.com/repos/huggingface/datasets/issues/3393
https://github.com/huggingface/datasets/issues/3393
3,393
Common Voice Belarusian Dataset
open
0
2021-12-07T10:37:02
2021-12-09T15:56:03
null
wiedymi
[ "dataset request", "speech" ]
## Adding a Dataset - **Name:** *Common Voice Belarusian Dataset* - **Description:** *[commonvoice.mozilla.org/be](https://commonvoice.mozilla.org/be)* - **Data:** *[commonvoice.mozilla.org/be/datasets](https://commonvoice.mozilla.org/be/datasets)* - **Motivation:** *It has more than 7GB of data, so it will be grea...
false
1,073,073,408
https://api.github.com/repos/huggingface/datasets/issues/3392
https://github.com/huggingface/datasets/issues/3392
3,392
Dataset viewer issue for `dansbecker/hackernews_hiring_posts`
closed
1
2021-12-07T08:41:01
2021-12-07T14:04:28
2021-12-07T14:04:28
severo
[ "dataset-viewer" ]
## Dataset viewer issue for `dansbecker/hackernews_hiring_posts` **Link:** https://huggingface.co/datasets/dansbecker/hackernews_hiring_posts *short description of the issue* Dataset preview not showing for uploaded DatasetDict. See https://discuss.huggingface.co/t/dataset-preview-not-showing-for-uploaded-data...
false
1,072,849,055
https://api.github.com/repos/huggingface/datasets/issues/3391
https://github.com/huggingface/datasets/issues/3391
3,391
method to select columns
closed
1
2021-12-07T02:44:19
2021-12-07T02:45:27
2021-12-07T02:45:27
changjonathanc
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** * There is currently no way to select some columns of a dataset. In pandas, one can use `df[['col1', 'col2']]` to select columns, but in `datasets`, it results in error. **Describe the solution you'd like** * A new method that can be used to cr...
false
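The column-selection request in issue 3391 can be sketched in plain Python: `datasets` stores a dataset as named columns, so selecting a subset is essentially a key lookup. The helper below is an illustration only, not the library's API.

```python
def select_columns(table: dict, columns: list) -> dict:
    """Return a new columnar table keeping only the requested columns,
    analogous to pandas' df[['col1', 'col2']] (illustrative sketch)."""
    missing = [name for name in columns if name not in table]
    if missing:
        raise KeyError(f"Columns not in table: {missing}")
    return {name: table[name] for name in columns}

table = {"text": ["a", "b"], "label": [0, 1], "idx": [10, 11]}
print(select_columns(table, ["text", "label"]))
# {'text': ['a', 'b'], 'label': [0, 1]}
```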
1,072,462,456
https://api.github.com/repos/huggingface/datasets/issues/3390
https://github.com/huggingface/datasets/issues/3390
3,390
Loading dataset throws "KeyError: 'Field "builder_name" does not exist in table schema'"
closed
1
2021-12-06T18:22:49
2021-12-06T20:22:05
2021-12-06T20:22:05
R4ZZ3
[ "bug" ]
## Describe the bug I have prepared a dataset with `datasets` and now I am trying to load it back (Finnish-NLP/voxpopuli_fi). I get "KeyError: 'Field "builder_name" does not exist in table schema'". My dataset folder and files should be like @patrickvonplaten has here https://huggingface.co/datasets/flax-community/german-c...
false
1,072,191,865
https://api.github.com/repos/huggingface/datasets/issues/3389
https://github.com/huggingface/datasets/issues/3389
3,389
Add EDGAR
open
2
2021-12-06T14:06:11
2022-10-05T10:40:22
null
philschmid
[ "dataset request" ]
## Adding a Dataset - **Name:** EDGAR Database - **Description:** https://www.sec.gov/edgar/about EDGAR, the Electronic Data Gathering, Analysis, and Retrieval system, is the primary system for companies and others submitting documents under the Securities Act of 1933, the Securities Exchange Act of 1934, the Trust I...
false
1,072,022,021
https://api.github.com/repos/huggingface/datasets/issues/3388
https://github.com/huggingface/datasets/pull/3388
3,388
Fix flaky test of the temporary directory used by load_from_disk
closed
1
2021-12-06T11:09:31
2021-12-06T11:25:03
2021-12-06T11:24:49
lhoestq
[]
The test is flaky, here is an example of random CI failure: https://github.com/huggingface/datasets/commit/73ed6615b4b3eb74d5311684f7b9e05cdb76c989 I fixed that by not checking the content of the random part of the temporary directory name
true
1,071,836,456
https://api.github.com/repos/huggingface/datasets/issues/3387
https://github.com/huggingface/datasets/pull/3387
3,387
Create Language Modeling task
closed
0
2021-12-06T07:56:07
2021-12-17T17:18:28
2021-12-17T17:18:27
albertvillanova
[]
Create Language Modeling task to be able to specify the input "text" column in a dataset. This can be useful for datasets which are not exclusively used for language modeling and have more than one column: - for text classification datasets (with columns "review" and "rating", for example), the Language Modeling ta...
true
1,071,813,141
https://api.github.com/repos/huggingface/datasets/issues/3386
https://github.com/huggingface/datasets/pull/3386
3,386
Fix typos in dataset cards
closed
0
2021-12-06T07:20:40
2021-12-06T09:30:55
2021-12-06T09:30:54
albertvillanova
[]
This PR: - Fix typos in dataset cards - Fix Papers With Code ID for: - Bilingual Corpus of Arabic-English Parallel Tweets - Tweets Hate Speech Detection - Add pretty name tags
true
1,071,742,310
https://api.github.com/repos/huggingface/datasets/issues/3385
https://github.com/huggingface/datasets/issues/3385
3,385
None batched `with_transform`, `set_transform`
open
3
2021-12-06T05:20:54
2022-01-17T15:25:01
null
changjonathanc
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** A `torch.utils.data.Dataset.__getitem__` operates on a single example. But 🤗 `Datasets.with_transform` doesn't seem to allow non-batched transform. **Describe the solution you'd like** Have a `batched=True` argument in `Datasets.with_transfor...
false
1,071,594,165
https://api.github.com/repos/huggingface/datasets/issues/3384
https://github.com/huggingface/datasets/pull/3384
3,384
Adding mMARCO dataset
closed
0
2021-12-05T23:59:11
2021-12-12T15:27:36
2021-12-12T15:27:36
lhbonifacio
[]
We are adding the mMARCO dataset to the HuggingFace datasets repo. This way, all the languages covered in the translation are available in an easy way.
true
1,071,551,884
https://api.github.com/repos/huggingface/datasets/issues/3383
https://github.com/huggingface/datasets/pull/3383
3,383
add Georgian data in cc100.
closed
0
2021-12-05T20:38:09
2021-12-14T14:37:23
2021-12-14T14:37:22
AnzorGozalishvili
[]
Update the cc100 dataset to support loading Georgian (ka) data, which is originally available in the CC100 dataset source. All tests pass; dummy data and metadata generated.
true
1,071,293,299
https://api.github.com/repos/huggingface/datasets/issues/3382
https://github.com/huggingface/datasets/pull/3382
3,382
#3337 Add typing overloads to Dataset.__getitem__ for mypy
closed
2
2021-12-04T20:54:49
2021-12-14T10:28:55
2021-12-14T10:28:55
Dref360
[]
Add typing overloads to Dataset.__getitem__ for mypy Fixes #3337 **Iterable** Iterable from `collections` cannot have a type, so you can't do `Iterable[int]` for example. `typing` has a Generic version that builds upon the one from `collections`. **Flake8** I had to add `# noqa: F811`, this is a bug from Fl...
true
1,071,283,879
https://api.github.com/repos/huggingface/datasets/issues/3381
https://github.com/huggingface/datasets/issues/3381
3,381
Unable to load audio_features from common_voice dataset
closed
3
2021-12-04T19:59:11
2021-12-06T17:52:42
2021-12-06T17:52:42
ashu5644
[ "bug" ]
## Describe the bug I am not able to load audio features from common_voice dataset ## Steps to reproduce the bug ``` from datasets import load_dataset import torchaudio test_dataset = load_dataset("common_voice", "hi", split="test[:2%]") resampler = torchaudio.transforms.Resample(48_000, 16_000) def spe...
false
1,071,166,270
https://api.github.com/repos/huggingface/datasets/issues/3380
https://github.com/huggingface/datasets/issues/3380
3,380
[Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem!
closed
0
2021-12-04T09:18:33
2022-01-11T12:29:53
2022-01-11T12:29:53
LysandreJik
[]
Thanks to all of you, `datasets` will pass 11.5k stars :star2: this week! If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts: [**hf.co/oss-survey**](https://hf.co/oss-survey) (please reply in the above feedback form rather than to this th...
false
1,071,079,146
https://api.github.com/repos/huggingface/datasets/issues/3379
https://github.com/huggingface/datasets/pull/3379
3,379
iter_archive on zipfiles with better compression type check
closed
10
2021-12-04T01:04:48
2023-01-24T13:00:19
2023-01-24T12:53:08
Mehdi2402
[]
Hello @lhoestq , thank you for your detailed answer on previous PR ! I made this new PR because I misused git on the previous one #3347. Related issue #3272. # Comments : * For extension check I used the `_get_extraction_protocol` function in **download_manager.py** with a slight change and called it `_get_e...
true
1,070,580,126
https://api.github.com/repos/huggingface/datasets/issues/3378
https://github.com/huggingface/datasets/pull/3378
3,378
Add The Pile subsets
closed
0
2021-12-03T13:14:54
2021-12-09T18:11:25
2021-12-09T18:11:23
albertvillanova
[]
Add The Pile subsets: - pubmed - ubuntu_irc - europarl - hacker_news - nih_exporter Close bigscience-workshop/data_tooling#301. CC: @StellaAthena
true
1,070,562,907
https://api.github.com/repos/huggingface/datasets/issues/3377
https://github.com/huggingface/datasets/pull/3377
3,377
COCO 🥥 on the 🤗 Hub?
closed
4
2021-12-03T12:55:27
2021-12-20T14:14:01
2021-12-20T14:14:00
merveenoyan
[]
This is a draft PR since I ran into a few small problems. I referred to this TFDS code: https://github.com/tensorflow/datasets/blob/2538a08c184d53b37bfcf52cc21dd382572a88f4/tensorflow_datasets/object_detection/coco.py cc: @mariosasko
true
1,070,522,979
https://api.github.com/repos/huggingface/datasets/issues/3376
https://github.com/huggingface/datasets/pull/3376
3,376
Update clue benchmark
closed
1
2021-12-03T12:06:01
2021-12-08T14:14:42
2021-12-08T14:14:41
mariosasko
[]
Fix #3374
true
1,070,454,913
https://api.github.com/repos/huggingface/datasets/issues/3375
https://github.com/huggingface/datasets/pull/3375
3,375
Support streaming zipped dataset repo by passing only repo name
closed
6
2021-12-03T10:43:05
2021-12-16T18:03:32
2021-12-16T18:03:31
albertvillanova
[]
Proposed solution: - I have added the method `iter_files` to DownloadManager and StreamingDownloadManager - I use this in modules: "csv", "json", "text" - I test for CSV/JSONL/TXT zipped (and non-zipped) files, both in streaming and non-streaming modes Fix #3373.
true
1,070,426,462
https://api.github.com/repos/huggingface/datasets/issues/3374
https://github.com/huggingface/datasets/issues/3374
3,374
NonMatchingChecksumError for the CLUE:cluewsc2020, chid, c3 and tnews
closed
2
2021-12-03T10:10:54
2021-12-08T14:14:41
2021-12-08T14:14:41
Namco0816
[]
Hi, it seems like there are updates in cluewsc2020, chid, c3 and tnews, since I could not load them due to the checksum error.
false
1,070,406,391
https://api.github.com/repos/huggingface/datasets/issues/3373
https://github.com/huggingface/datasets/issues/3373
3,373
Support streaming zipped CSV dataset repo by passing only repo name
closed
0
2021-12-03T09:48:24
2021-12-16T18:03:31
2021-12-16T18:03:31
albertvillanova
[ "enhancement" ]
Given a community 🤗 dataset repository containing only a zipped CSV file (only raw data, no loading script), I would like to load it in streaming mode without passing `data_files`: ``` ds_name = "bigscience-catalogue-data/vietnamese_poetry_from_fsoft_ai_lab" ds = load_dataset(ds_name, split="train", streaming=True,...
false
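Issues 3373/3375 amount to inferring a streaming protocol chain from the file extension, so that an archive protocol can be layered over `https://` (fsspec-style URL chaining) and the archive read lazily instead of downloaded. A hedged sketch of that inference; the real logic lives in the download managers, and this helper name is made up:

```python
def infer_streaming_url(url: str) -> str:
    """Map a raw file URL to an fsspec-style chained URL so the archive
    can be read lazily over HTTP (illustration, not the datasets code)."""
    if url.endswith(".zip"):
        return f"zip://::{url}"
    if url.endswith(".gz"):
        return f"gzip://::{url}"
    return url  # plain files need no chaining

print(infer_streaming_url("https://hf.co/data/train.zip"))
# zip://::https://hf.co/data/train.zip
```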
1,069,948,178
https://api.github.com/repos/huggingface/datasets/issues/3372
https://github.com/huggingface/datasets/issues/3372
3,372
[SEO improvement] Add Dataset Metadata to make datasets indexable
closed
0
2021-12-02T20:21:07
2022-03-18T09:36:48
2022-03-18T09:36:48
cakiki
[ "enhancement" ]
Some people who host datasets on github seem to include a table of metadata at the end of their README.md to make the dataset indexable by [Google Dataset Search](https://datasetsearch.research.google.com/) (See [here](https://github.com/google-research/google-research/tree/master/goemotions#dataset-metadata) and [here...
false
1,069,821,335
https://api.github.com/repos/huggingface/datasets/issues/3371
https://github.com/huggingface/datasets/pull/3371
3,371
New: Americas NLI dataset
closed
0
2021-12-02T17:44:59
2021-12-08T13:58:12
2021-12-08T13:58:11
fdschmidt93
[]
This PR adds the [Americas NLI](https://arxiv.org/abs/2104.08726) dataset, extension of XNLI to 10 low-resource indigenous languages spoken in the Americas: Ashaninka, Aymara, Bribri, Guarani, Nahuatl, Otomi, Quechua, Raramuri, Shipibo-Konibo, and Wixarika. One odd thing (not sure) is that I had to set `datasets-...
true
1,069,735,423
https://api.github.com/repos/huggingface/datasets/issues/3370
https://github.com/huggingface/datasets/pull/3370
3,370
Document a training loop for streaming dataset
closed
0
2021-12-02T16:17:00
2021-12-03T13:34:35
2021-12-03T13:34:34
lhoestq
[]
I added some docs about streaming dataset. In particular I added two subsections: - one on how to use `map` for preprocessing - one on how to use a streaming dataset in a pytorch training loop cc @patrickvonplaten @stevhliu if you have some comments cc @Rocketknight1 later we can add the one for TF and I might ne...
true
1,069,587,674
https://api.github.com/repos/huggingface/datasets/issues/3369
https://github.com/huggingface/datasets/issues/3369
3,369
[Audio] Allow resampling for audio datasets in streaming mode
closed
2
2021-12-02T14:04:57
2021-12-16T15:55:19
2021-12-16T15:55:19
patrickvonplaten
[ "enhancement" ]
Many audio datasets like Common Voice always need to be resampled. This can very easily be done in non-streaming mode as follows: ```python from datasets import load_dataset ds = load_dataset("common_voice", "ab", split="test") ds = ds.cast_column("audio", Audio(sampling_rate=16_000)) ``` However in strea...
false
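Issue 3369's streaming case requires resampling example-by-example as the stream is iterated, rather than up front over the whole dataset. A stdlib-only linear-interpolation sketch of such a lazy, per-example resample (a real pipeline would use torchaudio or librosa):

```python
def resample(samples, src_rate, dst_rate):
    """Naive linear-interpolation resampler applied to one example,
    as a lazy map() over a stream would do (illustration only)."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # position in the source signal
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# A generator expression resamples each example only when it is consumed:
stream = iter([{"audio": [0.0, 1.0, 2.0, 3.0], "rate": 4}])
resampled = ({"audio": resample(ex["audio"], ex["rate"], 2), "rate": 2}
             for ex in stream)
print(next(resampled)["audio"])  # [0.0, 2.0]
```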
1,069,403,624
https://api.github.com/repos/huggingface/datasets/issues/3368
https://github.com/huggingface/datasets/pull/3368
3,368
Fix dict source_datasets tagset validator
closed
0
2021-12-02T10:52:20
2021-12-02T15:48:38
2021-12-02T15:48:37
albertvillanova
[]
Currently, the `source_datasets` tag validation does not support passing a dict with configuration keys. This PR: - Extends `tagset_validator` to support regex tags - Uses `tagset_validator` to validate dict `source_datasets`
true
1,069,241,274
https://api.github.com/repos/huggingface/datasets/issues/3367
https://github.com/huggingface/datasets/pull/3367
3,367
Fix typo in other-structured-to-text task tag
closed
0
2021-12-02T08:02:27
2021-12-02T16:07:14
2021-12-02T16:07:13
albertvillanova
[]
Fix typo in task tag: - `other-stuctured-to-text` (before) - `other-structured-to-text` (now)
true
1,069,214,022
https://api.github.com/repos/huggingface/datasets/issues/3366
https://github.com/huggingface/datasets/issues/3366
3,366
Add multimodal datasets
open
0
2021-12-02T07:24:04
2023-02-28T16:29:22
null
albertvillanova
[ "dataset request" ]
Epic issue to track the addition of multimodal datasets: - [ ] #2526 - [x] #1842 - [ ] #1810 Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). @VictorSanh feel free to add and sort by priority any interesting dataset. I have added the...
false
1,069,195,887
https://api.github.com/repos/huggingface/datasets/issues/3365
https://github.com/huggingface/datasets/issues/3365
3,365
Add task tags for multimodal datasets
closed
1
2021-12-02T06:58:20
2023-07-25T18:21:33
2023-07-25T18:21:32
albertvillanova
[ "enhancement" ]
## **Is your feature request related to a problem? Please describe.** Currently, task tags are either exclusively related to text or speech processing: - https://github.com/huggingface/datasets/blob/master/src/datasets/utils/resources/tasks.json ## **Describe the solution you'd like** We should also add tasks...
false
1,068,851,196
https://api.github.com/repos/huggingface/datasets/issues/3364
https://github.com/huggingface/datasets/pull/3364
3,364
Use the Audio feature in the AutomaticSpeechRecognition template
closed
4
2021-12-01T20:42:26
2022-03-24T14:34:09
2022-03-24T14:34:08
anton-l
[]
This updates the ASR template and all supported datasets to use the `Audio` feature
true
1,068,824,340
https://api.github.com/repos/huggingface/datasets/issues/3363
https://github.com/huggingface/datasets/pull/3363
3,363
Update URL of Jeopardy! dataset
closed
2
2021-12-01T20:08:10
2022-10-06T13:45:49
2021-12-03T12:35:01
mariosasko
[]
Updates the URL of the Jeopardy! dataset. Fix #3361
true
1,068,809,768
https://api.github.com/repos/huggingface/datasets/issues/3362
https://github.com/huggingface/datasets/pull/3362
3,362
Adapt image datasets
closed
3
2021-12-01T19:52:01
2021-12-09T18:37:42
2021-12-09T18:37:41
mariosasko
[]
This PR: * adapts the ImageClassification template to use the new Image feature * adapts the following datasets to use the new Image feature: * beans (+ fixes streaming) * cats_vs_dogs (+ fixes streaming) * cifar10 * cifar100 * fashion_mnist * mnist * head_qa cc @nateraw
true
1,068,736,268
https://api.github.com/repos/huggingface/datasets/issues/3361
https://github.com/huggingface/datasets/issues/3361
3,361
Jeopardy _URL access denied
closed
1
2021-12-01T18:21:33
2021-12-11T12:50:23
2021-12-06T11:16:31
tianjianjiang
[ "bug" ]
## Describe the bug http://skeeto.s3.amazonaws.com/share/JEOPARDY_QUESTIONS1.json.gz returns Access Denied now. However, https://drive.google.com/file/d/0BwT5wj_P7BKXb2hfM3d2RHU1ckE/view?usp=sharing from the original Reddit post https://www.reddit.com/r/datasets/comments/1uyd0t/200000_jeopardy_questions_in_a_json_f...
false
1,068,724,697
https://api.github.com/repos/huggingface/datasets/issues/3360
https://github.com/huggingface/datasets/pull/3360
3,360
Add The Pile USPTO subset
closed
0
2021-12-01T18:08:05
2021-12-03T11:45:29
2021-12-03T11:45:28
albertvillanova
[]
Add: - USPTO subset of The Pile: "uspto" config Close bigscience-workshop/data_tooling#297. CC: @StellaAthena
true
1,068,638,213
https://api.github.com/repos/huggingface/datasets/issues/3359
https://github.com/huggingface/datasets/pull/3359
3,359
Add The Pile Free Law subset
closed
3
2021-12-01T16:46:04
2021-12-06T10:12:17
2021-12-01T17:30:44
albertvillanova
[]
Add: - Free Law subset of The Pile: "free_law" config Close bigscience-workshop/data_tooling#75. CC: @StellaAthena
true
1,068,623,216
https://api.github.com/repos/huggingface/datasets/issues/3358
https://github.com/huggingface/datasets/issues/3358
3,358
add new field, and get errors
closed
2
2021-12-01T16:35:38
2021-12-02T02:26:22
2021-12-02T02:26:22
PatricYan
[]
After adding the new field **tokenized_examples["example_id"]**, I get the errors below. I think it is due to the conversion of the data to tensors, since **tokenized_examples["example_id"]** is a list of strings. **all fields** ``` ***************** train_dataset 1: Dataset({ features: ['attention_mask', 'end_positions', 'example_id', '...
false
1,068,607,382
https://api.github.com/repos/huggingface/datasets/issues/3357
https://github.com/huggingface/datasets/pull/3357
3,357
Update languages in aeslc dataset card
closed
0
2021-12-01T16:20:46
2022-09-23T13:16:49
2022-09-23T13:16:49
apergo-ai
[ "dataset contribution" ]
After having worked a bit with the dataset: as far as I know, it is solely in English (en-US). There are only a few mails in Spanish, French or German (fewer than a dozen, I would estimate).
true
1,068,503,932
https://api.github.com/repos/huggingface/datasets/issues/3356
https://github.com/huggingface/datasets/pull/3356
3,356
to_tf_dataset() refactor
closed
5
2021-12-01T14:54:30
2021-12-09T10:26:53
2021-12-09T10:26:53
Rocketknight1
[]
This is the promised cleanup to `to_tf_dataset()` now that the course is out of the way! The main changes are: - A collator is always required (there was way too much hackiness making things like labels work without it) - Lots of cleanup and a lot of code moved to `_get_output_signature` - Should now handle it gra...
true
1,068,468,573
https://api.github.com/repos/huggingface/datasets/issues/3355
https://github.com/huggingface/datasets/pull/3355
3,355
Extend support for streaming datasets that use pd.read_excel
closed
1
2021-12-01T14:22:43
2021-12-17T07:24:19
2021-12-17T07:24:18
albertvillanova
[]
This PR fixes error: ``` ValueError: Cannot seek streaming HTTP file ``` CC: @severo
true
1,068,307,271
https://api.github.com/repos/huggingface/datasets/issues/3354
https://github.com/huggingface/datasets/pull/3354
3,354
Remove duplicate name from dataset cards
closed
0
2021-12-01T11:45:40
2021-12-01T13:14:30
2021-12-01T13:14:29
albertvillanova
[]
Remove duplicate name from dataset card for: - ajgt_twitter_ar - emotone_ar
true
1,068,173,783
https://api.github.com/repos/huggingface/datasets/issues/3353
https://github.com/huggingface/datasets/issues/3353
3,353
add one field "example_id", but I can't see it in the "compute_loss" function
closed
7
2021-12-01T09:35:09
2021-12-01T16:02:39
2021-12-01T16:02:39
PatricYan
[]
Hi, I added one field, **example_id**, but I can't see it in the **compute_loss** function. How can I do this? Below is the information of the inputs ``` *********************** inputs: {'attention_mask': tensor([[1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], [1, 1, 1, ..., 0, 0, 0], ..., ...
false
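A likely cause of issue 3353 is that trainers prune batch columns the model's `forward()` does not accept (e.g. the `remove_unused_columns` behavior in `transformers`), so extra fields never reach `compute_loss`. A stdlib sketch of that pruning, as an illustration only:

```python
import inspect

def prune_unused_columns(batch: dict, forward) -> dict:
    """Drop batch entries whose names are not parameters of forward(),
    mimicking why custom fields like 'example_id' can disappear before
    compute_loss is called (illustrative sketch)."""
    accepted = set(inspect.signature(forward).parameters)
    return {name: col for name, col in batch.items() if name in accepted}

def forward(input_ids, attention_mask):  # toy model signature
    return len(input_ids)

batch = {"input_ids": [[1, 2]], "attention_mask": [[1, 1]], "example_id": ["q1"]}
print(sorted(prune_unused_columns(batch, forward)))
# ['attention_mask', 'input_ids']
```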
1,068,102,994
https://api.github.com/repos/huggingface/datasets/issues/3352
https://github.com/huggingface/datasets/pull/3352
3,352
Make LABR dataset streamable
closed
0
2021-12-01T08:22:27
2021-12-01T10:49:02
2021-12-01T10:49:01
albertvillanova
[]
Fix LABR dataset to make it streamable. Related to: #3350.
true
1,068,094,873
https://api.github.com/repos/huggingface/datasets/issues/3351
https://github.com/huggingface/datasets/pull/3351
3,351
Add VCTK dataset
closed
9
2021-12-01T08:13:17
2022-02-28T09:22:03
2021-12-28T15:05:08
jaketae
[]
Fixes #1837.
true
1,068,078,160
https://api.github.com/repos/huggingface/datasets/issues/3350
https://github.com/huggingface/datasets/pull/3350
3,350
Avoid content-encoding issue while streaming datasets
closed
0
2021-12-01T07:56:48
2021-12-01T08:15:01
2021-12-01T08:15:00
albertvillanova
[]
This PR will fix streaming of datasets served with gzip content-encoding: ``` ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` Fix #2918. CC: @severo
true
1,067,853,601
https://api.github.com/repos/huggingface/datasets/issues/3349
https://github.com/huggingface/datasets/pull/3349
3,349
raise exception instead of using assertions.
closed
6
2021-12-01T01:37:51
2021-12-20T16:07:27
2021-12-20T16:07:27
manisnesan
[]
fix for the remaining files https://github.com/huggingface/datasets/issues/3171
true
1,067,831,113
https://api.github.com/repos/huggingface/datasets/issues/3348
https://github.com/huggingface/datasets/pull/3348
3,348
BLEURT: Match key names to correspond with filename
closed
3
2021-12-01T01:01:18
2021-12-07T16:06:57
2021-12-07T16:06:57
jaehlee
[]
In order to properly locate downloaded ckpt files key name needs to match filename. Correcting change introduced in #3235
true
1,067,738,902
https://api.github.com/repos/huggingface/datasets/issues/3347
https://github.com/huggingface/datasets/pull/3347
3,347
iter_archive for zip files
closed
1
2021-11-30T22:34:17
2021-12-04T00:22:22
2021-12-04T00:22:11
Mehdi2402
[]
* In this PR, I added the option to iterate through zipfiles for `download_manager.py` only. * Next PR will be the same applied to `streaming_download_manager.py`. * Related issue #3272. ## Comments : * There is no `.isreg()` equivalent in zipfile library to check if file is Regular so I used `.is_dir()` instead ...
true
1,067,632,365
https://api.github.com/repos/huggingface/datasets/issues/3346
https://github.com/huggingface/datasets/issues/3346
3,346
Failed to convert `string` with pyarrow for QED since 1.15.0
closed
2
2021-11-30T20:11:42
2021-12-14T14:39:05
2021-12-14T14:39:05
tianjianjiang
[ "bug" ]
## Describe the bug Loading QED was fine until 1.15.0. related: bigscience-workshop/promptsource#659, bigscience-workshop/promptsource#670 Not sure where the root cause is, but here are some candidates: - #3158 - #3120 - #3196 - #2891 ## Steps to reproduce the bug ```python load_dataset("qed") ``` ## ...
false
1,067,622,951
https://api.github.com/repos/huggingface/datasets/issues/3345
https://github.com/huggingface/datasets/issues/3345
3,345
Failed to download species_800 from Google Drive zip file
closed
3
2021-11-30T20:00:28
2021-12-01T17:53:15
2021-12-01T17:53:15
tianjianjiang
[ "bug" ]
## Describe the bug One can manually download the zip file on Google Drive, but `load_dataset()` cannot. related: #3248 ## Steps to reproduce the bug ```shell > python Python 3.7.12 (default, Sep 5 2021, 08:34:29) [Clang 11.0.3 (clang-1103.0.32.62)] on darwin Type "help", "copyright", "credits" or "license" ...
false
1,067,567,603
https://api.github.com/repos/huggingface/datasets/issues/3344
https://github.com/huggingface/datasets/pull/3344
3,344
Add ArrayXD docs
closed
0
2021-11-30T18:53:31
2021-12-01T20:16:03
2021-12-01T19:35:32
stevhliu
[]
Documents support for dynamic first dimension in `ArrayXD` from #2891, and explain the `ArrayXD` feature in general. Let me know if I'm missing anything @lhoestq :)
true
1,067,505,507
https://api.github.com/repos/huggingface/datasets/issues/3343
https://github.com/huggingface/datasets/pull/3343
3,343
Better error message when download fails
closed
0
2021-11-30T17:38:50
2021-12-01T11:27:59
2021-12-01T11:27:58
lhoestq
[]
From our discussions in https://github.com/huggingface/datasets/issues/3269 and https://github.com/huggingface/datasets/issues/3282 it would be nice to have better messages if a download fails. In particular the error now shows: - the error from the HEAD request if there's one - otherwise the response code of the ...
true
1,067,481,390
https://api.github.com/repos/huggingface/datasets/issues/3342
https://github.com/huggingface/datasets/pull/3342
3,342
Fix ASSET dataset data URLs
closed
1
2021-11-30T17:13:30
2021-12-14T14:50:00
2021-12-14T14:50:00
tianjianjiang
[]
Change the branch name "master" to "main" in the data URLs, since facebookresearch has changed that.
true
1,067,449,569
https://api.github.com/repos/huggingface/datasets/issues/3341
https://github.com/huggingface/datasets/issues/3341
3,341
Mirror the canonical datasets to the Hugging Face Hub
closed
2
2021-11-30T16:42:05
2022-01-26T14:47:37
2022-01-26T14:47:37
severo
[ "enhancement" ]
- [ ] create a repo on https://hf.co/datasets for every canonical dataset - [ ] on every commit related to a dataset, update the hf.co repo See https://github.com/huggingface/moon-landing/pull/1562 @SBrandeis: I let you edit this description if needed to precise the intent.
false
1,067,292,636
https://api.github.com/repos/huggingface/datasets/issues/3340
https://github.com/huggingface/datasets/pull/3340
3,340
Fix JSON ClassLabel casting for integers
closed
0
2021-11-30T14:19:54
2021-12-01T11:27:30
2021-12-01T11:27:30
lhoestq
[]
Loading a JSON dataset with ClassLabel feature types currently fails if the JSON data already has integers. Indeed currently it tries to convert the strings to integers without even checking if the data are not integers already. For example this currently fails: ```python from datasets import load_dataset, Feature...
true
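The fix described in 3340 boils down to checking whether a value is already an integer index before attempting string-to-int conversion. A hedged, stdlib-only sketch of that casting rule (not the library's actual implementation):

```python
def cast_to_class_label(values, names):
    """Cast a mixed column to ClassLabel integer indices: integers pass
    through unchanged, strings are looked up (illustrative sketch)."""
    str2int = {name: i for i, name in enumerate(names)}
    out = []
    for v in values:
        if isinstance(v, int):
            if not 0 <= v < len(names):
                raise ValueError(f"Label id {v} out of range")
            out.append(v)            # already an integer index: keep as-is
        else:
            out.append(str2int[v])   # convert label string to its index
    return out

print(cast_to_class_label([0, "pos", 1, "neg"], ["neg", "pos"]))
# [0, 1, 1, 0]
```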
1,066,662,477
https://api.github.com/repos/huggingface/datasets/issues/3339
https://github.com/huggingface/datasets/issues/3339
3,339
to_tf_dataset fails on TPU
open
5
2021-11-30T00:50:52
2021-12-02T14:21:27
null
nbroad1881
[ "bug" ]
Using `to_tf_dataset` to create a dataset and then putting it in `model.fit` results in an internal error on TPUs. I've only tried on Colab and Kaggle TPUs, not GCP TPUs. ## Steps to reproduce the bug I made a colab to show the error. https://colab.research.google.com/drive/12x_PFKzGouFxqD4OuWfnycW_1TaT276z?usp=s...
false
1,066,371,235
https://api.github.com/repos/huggingface/datasets/issues/3338
https://github.com/huggingface/datasets/pull/3338
3,338
[WIP] Add doctests for tutorials
closed
1
2021-11-29T18:40:46
2023-05-05T17:18:20
2023-05-05T17:18:15
stevhliu
[]
Opening a PR as discussed with @LysandreJik for some help with doctest issues. The goal is to add doctests for each of the tutorials in the documentation to make sure the code samples work as shown. ### Issues A doctest has been added in the docstring of the `load_dataset_builder` function in `load.py` to handle ...
true
1,066,232,936
https://api.github.com/repos/huggingface/datasets/issues/3337
https://github.com/huggingface/datasets/issues/3337
3,337
Typing of Dataset.__getitem__ could be improved.
closed
2
2021-11-29T16:20:11
2021-12-14T10:28:54
2021-12-14T10:28:54
Dref360
[ "bug" ]
## Describe the bug The newly added typing for Dataset.__getitem__ is Union[Dict, List]. This makes tools like mypy a bit awkward to use as we need to check the type manually. We could use type overloading to make this easier. [Documentation](https://docs.python.org/3/library/typing.html#typing.overload) ## Steps...
false
1,066,208,436
https://api.github.com/repos/huggingface/datasets/issues/3336
https://github.com/huggingface/datasets/pull/3336
3,336
Add support for multiple dynamic dimensions and to_pandas conversion for dynamic arrays
closed
0
2021-11-29T15:58:59
2023-09-24T09:53:52
2023-05-16T18:24:46
mariosasko
[]
Add support for multiple dynamic dimensions (e.g. `(None, None, 3)` for arbitrary sized images) and `to_pandas()` conversion for dynamic arrays. TODOs: * [ ] Cleaner code * [ ] Formatting issues (if NumPy doesn't allow broadcasting even though dtype is np.object) * [ ] Fix some issues with zero-dim tensors * [ ...
true
1,066,064,126
https://api.github.com/repos/huggingface/datasets/issues/3335
https://github.com/huggingface/datasets/pull/3335
3,335
add Speech commands dataset
closed
11
2021-11-29T13:52:47
2021-12-10T10:37:21
2021-12-10T10:30:15
polinaeterna
[]
closes #3283
true
1,065,983,923
https://api.github.com/repos/huggingface/datasets/issues/3334
https://github.com/huggingface/datasets/issues/3334
3,334
Integrate Polars library
closed
8
2021-11-29T12:31:54
2024-08-31T05:31:28
2024-08-31T05:31:27
albertvillanova
[ "enhancement" ]
Check potential integration of the Polars library: https://github.com/pola-rs/polars - Benchmark: https://h2oai.github.io/db-benchmark/ CC: @thomwolf @lewtun
false
1,065,346,919
https://api.github.com/repos/huggingface/datasets/issues/3333
https://github.com/huggingface/datasets/issues/3333
3,333
load JSON files, get the errors
closed
12
2021-11-28T14:29:58
2021-12-01T09:34:31
2021-12-01T03:57:48
PatricYan
[]
Hi, has this bug been fixed? When I load JSON files, I get the same errors with the command `!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`. I changed the dataset loading to JSON by referring to https://huggingface.co/docs/datasets/loading.html `dataset = ...
false
1,065,345,853
https://api.github.com/repos/huggingface/datasets/issues/3332
https://github.com/huggingface/datasets/pull/3332
3,332
Fix error message and add extension fallback
closed
0
2021-11-28T14:25:29
2021-11-29T13:34:15
2021-11-29T13:34:14
mariosasko
[]
Fix the error message raised if `infered_module_name` is `None` in `CommunityDatasetModuleFactoryWithoutScript.get_module` and make `infer_module_for_data_files` more robust. In the linked issue, `infer_module_for_data_files` returns `None` because `json` is the second most common extension due to the suffix orderi...
true
1,065,275,896
https://api.github.com/repos/huggingface/datasets/issues/3331
https://github.com/huggingface/datasets/issues/3331
3,331
AttributeError: 'CommunityDatasetModuleFactoryWithoutScript' object has no attribute 'path'
closed
1
2021-11-28T08:54:05
2021-11-29T13:49:44
2021-11-29T13:34:14
luozhouyang
[ "bug" ]
## Describe the bug I added a new question answering dataset to huggingface datasets manually. Here is the link: [luozhouyang/question-answering-datasets](https://huggingface.co/datasets/luozhouyang/question-answering-datasets) But when I load the dataset, an error is raised: ```bash AttributeError: 'CommunityDatas...
false
1,065,176,619
https://api.github.com/repos/huggingface/datasets/issues/3330
https://github.com/huggingface/datasets/pull/3330
3,330
Change TriviaQA license (#3313)
closed
0
2021-11-28T03:26:45
2021-11-29T11:24:21
2021-11-29T11:24:21
avinashsai
[]
Fixes (#3313)
true
1,065,096,971
https://api.github.com/repos/huggingface/datasets/issues/3329
https://github.com/huggingface/datasets/issues/3329
3,329
Map function: Type error on iter #999
closed
4
2021-11-27T17:53:05
2021-11-29T20:40:15
2021-11-29T20:40:15
josephkready666
[ "bug" ]
## Describe the bug Using the map function, it throws a type error on iter #999 Here is the code I am calling: ``` dataset = datasets.load_dataset('squad') dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'}) ``` text_numbers_to_int returns the input text ...
false
1,065,015,262
https://api.github.com/repos/huggingface/datasets/issues/3328
https://github.com/huggingface/datasets/pull/3328
3,328
Quick fix error formatting
closed
0
2021-11-27T11:47:48
2021-11-29T13:32:42
2021-11-29T13:32:42
NouamaneTazi
[]
While working on a dataset, I got the error ``` TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types {[type(x) for x in processed_inputs.values()]}. When using `batched=True`, make sure provided `function` returns a `dict` of types like `{allowed_batch_return_types}`. ``...
true
1,064,675,888
https://api.github.com/repos/huggingface/datasets/issues/3327
https://github.com/huggingface/datasets/issues/3327
3,327
"Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)"
closed
1
2021-11-26T16:26:36
2021-11-26T16:44:11
2021-11-26T16:44:11
eliasws
[ "bug" ]
## Describe the bug Passing a correctly shaped Numpy-Array to get_nearest_examples leads to the Exception "Shape of query is incorrect, it has to be either a 1D array or 2D (1, N)" Probably the reason for this is a wrongly converted assertion. 1.15.1: `assert len(query.shape) == 1 or (len(query.shape) == 2...
false
1,064,664,479
https://api.github.com/repos/huggingface/datasets/issues/3326
https://github.com/huggingface/datasets/pull/3326
3,326
Fix import `datasets` on python 3.10
closed
0
2021-11-26T16:10:00
2021-11-26T16:31:23
2021-11-26T16:31:23
lhoestq
[]
In python 3.10 it's no longer possible to use `functools.wraps` on a method decorated with `classmethod`. To fix this I inverted the order of the `inject_arrow_table_documentation` and `classmethod` decorators Fix #3324
true
1,064,663,075
https://api.github.com/repos/huggingface/datasets/issues/3325
https://github.com/huggingface/datasets/pull/3325
3,325
Update conda dependencies
closed
0
2021-11-26T16:08:07
2021-11-26T16:20:37
2021-11-26T16:20:36
lhoestq
[]
Some dependencies minimum versions were outdated. For example `pyarrow` and `huggingface_hub`
true
1,064,661,212
https://api.github.com/repos/huggingface/datasets/issues/3324
https://github.com/huggingface/datasets/issues/3324
3,324
Can't import `datasets` in python 3.10
closed
0
2021-11-26T16:06:14
2021-11-26T16:31:23
2021-11-26T16:31:23
lhoestq
[]
When importing `datasets` I'm getting this error in python 3.10: ```python Traceback (most recent call last): File "<string>", line 1, in <module> File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/__init__.py", line 34, in <module> from .arrow_dataset import Dataset, concatenate_datasets File "/Use...
false
1,064,660,452
https://api.github.com/repos/huggingface/datasets/issues/3323
https://github.com/huggingface/datasets/pull/3323
3,323
Fix wrongly converted assert
closed
1
2021-11-26T16:05:39
2021-11-26T16:44:12
2021-11-26T16:44:11
eliasws
[]
Seems like this assertion was replaced by an exception but the condition got wrongly converted.
true
1,064,429,705
https://api.github.com/repos/huggingface/datasets/issues/3322
https://github.com/huggingface/datasets/pull/3322
3,322
Add missing tags to XTREME
closed
0
2021-11-26T12:37:05
2021-11-29T13:40:07
2021-11-29T13:40:06
mariosasko
[]
Add missing tags to the XTREME benchmark for better discoverability.
true
1,063,858,386
https://api.github.com/repos/huggingface/datasets/issues/3321
https://github.com/huggingface/datasets/pull/3321
3,321
Update URL of tatoeba subset of xtreme
closed
2
2021-11-25T18:42:31
2021-11-26T10:30:30
2021-11-26T10:30:30
mariosasko
[]
Updates the URL of the tatoeba subset of xtreme. Additionally, replaces `os.path.join` with `xjoin` to correctly join the URL segments on Windows. Fix #3320
true
1,063,531,992
https://api.github.com/repos/huggingface/datasets/issues/3320
https://github.com/huggingface/datasets/issues/3320
3,320
Can't get tatoeba.rus dataset
closed
0
2021-11-25T12:31:11
2021-11-26T10:30:29
2021-11-26T10:30:29
mmg10
[ "bug" ]
## Describe the bug It gives an error. > FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/LASER/raw/master/data/tatoeba/v1/tatoeba.rus-eng.rus ## Steps to reproduce the bug ```python data=load_dataset("xtreme","tatoeba.rus", split="validation") ``` ## Solution The library tries...
false
1,062,749,654
https://api.github.com/repos/huggingface/datasets/issues/3319
https://github.com/huggingface/datasets/pull/3319
3,319
Add push_to_hub docs
closed
2
2021-11-24T18:21:11
2021-11-25T14:47:46
2021-11-25T14:47:46
lhoestq
[]
Since #3098 it's now possible to upload a dataset on the Hub directly from python using the `push_to_hub` method. I just added a section in the "Upload a dataset to the Hub" tutorial. I kept the section quite simple but let me know if it sounds good to you @LysandreJik @stevhliu :)
true
1,062,369,717
https://api.github.com/repos/huggingface/datasets/issues/3318
https://github.com/huggingface/datasets/pull/3318
3,318
Finish transition to PyArrow 3.0.0
closed
0
2021-11-24T12:30:14
2021-11-24T15:35:05
2021-11-24T15:35:04
mariosasko
[]
Finish transition to PyArrow 3.0.0 that was started in #3098.
true
1,062,284,447
https://api.github.com/repos/huggingface/datasets/issues/3317
https://github.com/huggingface/datasets/issues/3317
3,317
Add desc parameter to Dataset filter method
closed
4
2021-11-24T11:01:36
2022-01-05T18:31:24
2022-01-05T18:31:24
vblagoje
[ "enhancement" ]
**Is your feature request related to a problem? Please describe.** As I was filtering very large datasets I noticed the filter method doesn't have the desc parameter which is available in the map method. Why don't we add a desc parameter to the filter method, both for consistency and because it's nice to give some feedback to ...
false
1,062,185,822
https://api.github.com/repos/huggingface/datasets/issues/3316
https://github.com/huggingface/datasets/issues/3316
3,316
Add RedCaps dataset
closed
0
2021-11-24T09:23:02
2022-01-12T14:13:15
2022-01-12T14:13:15
albertvillanova
[ "dataset request", "vision" ]
## Adding a Dataset - **Name:** RedCaps - **Description:** Web-curated image-text data created by the people, for the people - **Paper:** https://arxiv.org/abs/2111.11431 - **Data:** https://redcaps.xyz/ - **Motivation:** Multimodal image-text dataset: 12M+ Image-text pairs Instructions to add a new dataset c...
false
1,061,678,452
https://api.github.com/repos/huggingface/datasets/issues/3315
https://github.com/huggingface/datasets/pull/3315
3,315
Removing query params for dynamic URL caching
closed
5
2021-11-23T20:24:12
2021-11-25T14:44:32
2021-11-25T14:44:31
anton-l
[]
The main use case for this is to make dynamically generated private URLs (like the ones returned by CommonVoice API) compatible with the datasets' caching logic. Usage example: ```python import datasets class CommonVoice(datasets.GeneratorBasedBuilder): def _info(self): return datasets.DatasetInfo...
true
1,061,448,227
https://api.github.com/repos/huggingface/datasets/issues/3314
https://github.com/huggingface/datasets/pull/3314
3,314
Adding arg to pass process rank to `map`
closed
1
2021-11-23T15:55:21
2021-11-24T11:54:13
2021-11-24T11:54:13
TevenLeScao
[]
This PR adds a `with_rank` argument to `map` that gives the user the possibility to pass the rank of each process to their function. This is mostly designed for multi-GPU map (each process can be sent to a different device thanks to the rank). I've also added tests. I'm putting the PR up so you can check the code, I'll...
true
1,060,933,392
https://api.github.com/repos/huggingface/datasets/issues/3313
https://github.com/huggingface/datasets/issues/3313
3,313
TriviaQA License Mismatch
closed
1
2021-11-23T08:00:15
2021-11-29T11:24:21
2021-11-29T11:24:21
akhilkedia
[ "bug" ]
## Describe the bug TriviaQA Webpage at http://nlp.cs.washington.edu/triviaqa/ says they do not own the copyright to the data. However, Huggingface datasets at https://huggingface.co/datasets/trivia_qa mentions that the dataset is released under Apache License Is the License Information on HuggingFace correct?
false
1,060,440,346
https://api.github.com/repos/huggingface/datasets/issues/3312
https://github.com/huggingface/datasets/pull/3312
3,312
add bl books genre dataset
closed
6
2021-11-22T17:54:50
2021-12-02T16:10:29
2021-12-02T16:07:47
davanstrien
[]
First of all thanks for the fantastic library/collection of datasets ๐Ÿค— This pull request adds a dataset of metadata from digitised (mostly 19th Century) books from the British Library The [data](https://bl.iro.bl.uk/concern/datasets/1e1ccb46-65b4-4481-b6f8-b8129d5da053) contains various metadata about the books. In...
true
1,060,387,957
https://api.github.com/repos/huggingface/datasets/issues/3311
https://github.com/huggingface/datasets/issues/3311
3,311
Add WebSRC
open
0
2021-11-22T16:58:33
2021-11-22T16:58:33
null
NielsRogge
[ "dataset request" ]
## Adding a Dataset - **Name:** WebSRC - **Description:** WebSRC is a novel Web-based Structural Reading Comprehension dataset. It consists of 0.44M question-answer pairs, which are collected from 6.5K web pages with corresponding HTML source code, screenshots and metadata. - **Paper:** https://arxiv.org/abs/2101.0...
false
1,060,098,104
https://api.github.com/repos/huggingface/datasets/issues/3310
https://github.com/huggingface/datasets/issues/3310
3,310
Fatal error condition occurred in aws-c-io
closed
28
2021-11-22T12:27:54
2023-02-08T10:31:05
2021-11-29T22:22:37
Crabzmatic
[ "bug" ]
## Describe the bug Fatal error when using the library ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset('wikiann', 'en') ``` ## Expected results No fatal errors ## Actual results ``` Fatal error condition occurred in D:\bld\aws-c-io_1633633258269\work\sou...
false
1,059,496,154
https://api.github.com/repos/huggingface/datasets/issues/3309
https://github.com/huggingface/datasets/pull/3309
3,309
fix: files counted twice in inferred structure
closed
8
2021-11-21T21:50:38
2021-11-23T17:00:58
2021-11-23T17:00:58
borisdayma
[]
Files were counted twice in a structure like: ``` my_dataset_local_path/ โ”œโ”€โ”€ README.md โ””โ”€โ”€ data/ โ”œโ”€โ”€ train/ โ”‚ โ”œโ”€โ”€ shard_0.csv โ”‚ โ”œโ”€โ”€ shard_1.csv โ”‚ โ”œโ”€โ”€ shard_2.csv โ”‚ โ””โ”€โ”€ shard_3.csv โ””โ”€โ”€ valid/ โ”œโ”€โ”€ shard_0.csv โ””โ”€โ”€ shard_1.csv ``` The reason is that they were ...
true
1,059,255,705
https://api.github.com/repos/huggingface/datasets/issues/3308
https://github.com/huggingface/datasets/issues/3308
3,308
"dataset_infos.json" missing for chr_en and mc4
open
3
2021-11-21T00:07:22
2022-01-19T13:55:32
null
amitness
[ "bug", "dataset bug" ]
## Describe the bug In the repository, every dataset has its metadata in a file called`dataset_infos.json`. But, this file is missing for two datasets: `chr_en` and `mc4`. ## Steps to reproduce the bug Check [chr_en](https://github.com/huggingface/datasets/tree/master/datasets/chr_en) and [mc4](https://github.com/...
false
1,059,226,297
https://api.github.com/repos/huggingface/datasets/issues/3307
https://github.com/huggingface/datasets/pull/3307
3,307
Add IndoNLI dataset
closed
1
2021-11-20T20:46:03
2021-11-25T14:51:48
2021-11-25T14:51:48
afaji
[]
This PR adds IndoNLI dataset, from https://aclanthology.org/2021.emnlp-main.821/
true
1,059,185,860
https://api.github.com/repos/huggingface/datasets/issues/3306
https://github.com/huggingface/datasets/issues/3306
3,306
nested sequence feature won't encode example if the first item of the outside sequence is an empty list
closed
3
2021-11-20T16:57:54
2021-12-08T13:02:15
2021-12-08T13:02:15
function2-llx
[ "bug" ]
## Describe the bug As the title, nested sequence feature won't encode example if the first item of the outside sequence is an empty list. ## Steps to reproduce the bug ```python from datasets import Features, Sequence, ClassLabel features = Features({ 'x': Sequence(Sequence(ClassLabel(names=['a', 'b']))), ...
false
1,059,161,000
https://api.github.com/repos/huggingface/datasets/issues/3305
https://github.com/huggingface/datasets/pull/3305
3,305
asserts replaced with exception for ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py``
closed
0
2021-11-20T14:51:23
2021-11-22T18:24:32
2021-11-22T17:08:13
Ishan-Kumar2
[]
Addresses #3171 Fixes exception for ``fingerprint.py``, ``search.py``, ``arrow_writer.py`` and ``metric.py`` and modified tests
true
1,059,130,494
https://api.github.com/repos/huggingface/datasets/issues/3304
https://github.com/huggingface/datasets/issues/3304
3,304
Dataset object has no attribute `to_tf_dataset`
closed
1
2021-11-20T12:03:59
2021-11-21T07:07:25
2021-11-21T07:07:25
RajkumarGalaxy
[ "bug" ]
I am following HuggingFace Course. I am at Fine-tuning a model. Link: https://huggingface.co/course/chapter3/2?fw=tf I use tokenize_function and `map` as mentioned in the course to process data. `# define a tokenize function` `def Tokenize_function(example):` ` return tokenizer(example['sentence'], truncat...
false
1,059,129,732
https://api.github.com/repos/huggingface/datasets/issues/3303
https://github.com/huggingface/datasets/issues/3303
3,303
DataCollatorWithPadding: TypeError
closed
1
2021-11-20T11:59:55
2021-11-21T07:05:37
2021-11-21T07:05:37
RajkumarGalaxy
[ "bug" ]
Hi, I am following the HuggingFace course. I am now at Fine-tuning [https://huggingface.co/course/chapter3/3?fw=tf](https://huggingface.co/course/chapter3/3?fw=tf). When I set up `DataCollatorWithPadding` as following I got an error while trying to reproduce the course code in Kaggle. This error occurs with either a ...
false
1,058,907,168
https://api.github.com/repos/huggingface/datasets/issues/3302
https://github.com/huggingface/datasets/pull/3302
3,302
fix old_val typo in f-string
closed
0
2021-11-19T20:51:08
2021-11-25T22:14:43
2021-11-22T17:04:19
Mehdi2402
[]
This PR is to correct a typo in #3277 that @Carlosbogo revealed in a comment. Related closed issue : #3257 Sorry about that 😅.
true