url stringlengths 58 61 | repository_url stringclasses 1 value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 46 51 | id int64 599M 1.83B | node_id stringlengths 18 32 | number int64 1 6.09k | title stringlengths 1 290 | labels list | state stringclasses 2 values | locked bool 1 class | milestone dict | comments int64 0 54 | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | closed_at stringlengths 20 20 ⌀ | active_lock_reason null | body stringlengths 0 228k ⌀ | reactions dict | timeline_url stringlengths 67 70 | performed_via_github_app null | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict | is_pull_request bool 2 classes | comments_text list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/5353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5353/comments | https://api.github.com/repos/huggingface/datasets/issues/5353/events | https://github.com/huggingface/datasets/issues/5353 | 1,491,880,500 | I_kwDODunzps5Y7Eo0 | 5,353 | Support remote file systems for `Audio` | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 1 | 2022-12-12T13:22:13Z | 2022-12-12T13:37:14Z | 2022-12-12T13:37:14Z | null | ### Feature request
Hi there!
It would be super cool if `Audio()`, and potentially other features, could read files from a remote file system.
### Motivation
Large amounts of data are often stored in buckets. `load_from_disk` is able to retrieve data from cloud storage, but to my knowledge it actually copies the dataset across first, so if you're working on a system with smaller disk specs (like a VM), you can run out of space very quickly.
### Your contribution
Something like this (for Google Cloud Platform in this instance):
```python
from datasets import Dataset, Audio
import gcsfs
fs = gcsfs.GCSFileSystem()
list_of_audio_fp = {'audio': ['1', '2', '3']}
ds = Dataset.from_dict(list_of_audio_fp)
ds = ds.cast_column("audio", Audio(sampling_rate=16000, fs=fs))
```
Under the hood:
```python
import librosa
from io import BytesIO
def load_audio(fp, sampling_rate=None, fs=None):
    if fs is not None:
        # Read the remote file into memory and decode it from there
        with fs.open(fp, 'rb') as f:
            arr, sr = librosa.load(BytesIO(f.read()), sr=sampling_rate)
    else:
        # Fall back to the existing local-file I/O path
        arr, sr = librosa.load(fp, sr=sampling_rate)
    return arr, sr
```
Written from memory so some things could be wrong. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5353/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5353/timeline | null | completed | null | null | false | [
"Just seen https://github.com/huggingface/datasets/issues/5281"
] |
https://api.github.com/repos/huggingface/datasets/issues/1770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1770/comments | https://api.github.com/repos/huggingface/datasets/issues/1770/events | https://github.com/huggingface/datasets/issues/1770 | 792,698,148 | MDU6SXNzdWU3OTI2OTgxNDg= | 1,770 | how can I combine 2 dataset with different/same features? | [] | closed | false | null | 3 | 2021-01-24T01:26:06Z | 2022-06-01T15:43:15Z | 2022-06-01T15:43:15Z | null | to combine 2 dataset by one-one map like ds = zip(ds1, ds2):
ds1: {'text'}, ds2: {'text'}, combine ds:{'src', 'tgt'}
or different feature:
ds1: {'src'}, ds2: {'tgt'}, combine ds:{'src', 'tgt'} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1770/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1770/timeline | null | completed | null | null | false | [
"Hi ! Currently we don't have a way to `zip` datasets but we plan to add this soon :)\r\nFor now you'll need to use `map` to add the fields from one dataset to the other. See the comment here for more info : https://github.com/huggingface/datasets/issues/853#issuecomment-727872188",
"Good to hear.\r\nCurrently I ... |
https://api.github.com/repos/huggingface/datasets/issues/4353 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4353/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4353/comments | https://api.github.com/repos/huggingface/datasets/issues/4353/events | https://github.com/huggingface/datasets/pull/4353 | 1,236,092,176 | PR_kwDODunzps43016x | 4,353 | Don't strip proceeding hyphen | [] | closed | false | null | 1 | 2022-05-14T18:25:29Z | 2022-05-16T18:51:38Z | 2022-05-16T13:52:11Z | null | Closes #4320. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4353/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4353/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4353.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4353",
"merged_at": "2022-05-16T13:52:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4353.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4353"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/122/comments | https://api.github.com/repos/huggingface/datasets/issues/122/events | https://github.com/huggingface/datasets/pull/122 | 618,813,182 | MDExOlB1bGxSZXF1ZXN0NDE4NDY2Mzc3 | 122 | Final cleanup of readme and metrics | [] | closed | false | null | 0 | 2020-05-15T09:00:52Z | 2021-09-03T19:40:09Z | 2020-05-15T09:02:22Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/122/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/122/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/122.diff",
"html_url": "https://github.com/huggingface/datasets/pull/122",
"merged_at": "2020-05-15T09:02:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/122.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/122"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/5587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5587/comments | https://api.github.com/repos/huggingface/datasets/issues/5587/events | https://github.com/huggingface/datasets/pull/5587 | 1,603,139,420 | PR_kwDODunzps5K70pp | 5,587 | Fix `sort` with indices mapping | [] | closed | false | null | 3 | 2023-02-28T14:05:08Z | 2023-02-28T17:28:57Z | 2023-02-28T17:21:58Z | null | Fixes the `key` range in the `query_table` call in `sort` to account for an indices mapping
Fix #5586 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5587/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5587/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5587.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5587",
"merged_at": "2023-02-28T17:21:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5587.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5587"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/2563 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2563/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2563/comments | https://api.github.com/repos/huggingface/datasets/issues/2563/events | https://github.com/huggingface/datasets/issues/2563 | 932,387,639 | MDU6SXNzdWU5MzIzODc2Mzk= | 2,563 | interleave_datasets for map-style datasets | [] | closed | false | null | 0 | 2021-06-29T08:57:24Z | 2021-07-01T09:33:33Z | 2021-07-01T09:33:33Z | null | Currently the `interleave_datasets` functions only works for `IterableDataset`.
Let's make it work for map-style `Dataset` objects as well.
It would work the same way: either alternate between the datasets in order or randomly given probabilities specified by the user. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2563/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2563/timeline | null | completed | null | null | false | [] |
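A minimal sketch of what the requested behavior could look like for map-style datasets, assuming the same signature as the existing iterable version:
```python
from datasets import Dataset, interleave_datasets

ds1 = Dataset.from_dict({"text": ["a1", "a2", "a3"]})
ds2 = Dataset.from_dict({"text": ["b1", "b2", "b3"]})

# Alternate between the datasets in order...
alternating = interleave_datasets([ds1, ds2])

# ...or sample randomly with user-specified probabilities
random_mix = interleave_datasets([ds1, ds2], probabilities=[0.7, 0.3], seed=42)
```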
https://api.github.com/repos/huggingface/datasets/issues/4526 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4526/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4526/comments | https://api.github.com/repos/huggingface/datasets/issues/4526/events | https://github.com/huggingface/datasets/issues/4526 | 1,276,580,185 | I_kwDODunzps5MFxFZ | 4,526 | split cache used when processing different split | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 2 | 2022-06-20T08:44:58Z | 2022-06-28T14:04:58Z | null | null | ## Describe the bug
```
ds1 = load_dataset('squad', split='validation')
ds2 = load_dataset('squad', split='train')
ds1 = ds1.map(some_function)
ds2 = ds2.map(some_function)
assert ds1 == ds2
```
This happens when ds1 and ds2 are created in `pytorch_lightning.DataModule` through
```
class myDataModule:
def train_dataloader(self):
ds = load_dataset('squad', split='train')
ds = ds.map(some_function)
return [ds]
def val_dataloader(self):
ds = load_dataset('squad', split="validation")
ds = ds.map(some_function)
return [ds]
```
I don't know if it depends on `pytorch_lightning` or `datasets` but setting `ds.map(some_function, load_from_cache_file=False)` fixes the issue.
If this is not enough to replicate it, I will try to provide an MWE; I don't have time now, so I thought I would open the issue first! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4526/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4526/timeline | null | null | null | null | false | [
"I was not able to reproduce this behavior (I tried without using pytorch lightning though, since I don't know what code you ran in pytorch lightning to get this).\r\n\r\nIf you can provide a MWE that would be perfect ! :)",
"Hi, I think the issue happened because I was loading datasets under an `if` ... `else` s... |
https://api.github.com/repos/huggingface/datasets/issues/1685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1685/comments | https://api.github.com/repos/huggingface/datasets/issues/1685/events | https://github.com/huggingface/datasets/pull/1685 | 778,914,431 | MDExOlB1bGxSZXF1ZXN0NTQ4OTM1MzY2 | 1,685 | Update README.md of covid-tweets-japanese | [] | closed | false | null | 1 | 2021-01-05T11:47:27Z | 2021-01-06T10:27:12Z | 2021-01-06T09:31:10Z | null | Update README.md of covid-tweets-japanese added by PR https://github.com/huggingface/datasets/pull/1367 and https://github.com/huggingface/datasets/pull/1402.
- Update "Data Splits" to be more precise that no information is provided for now.
- old: [More Information Needed]
- new: No information about data splits is provided for now.
- The automatic generation of links seemed not to be working properly, so I added a space before and after the URL to make the links work correctly. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1685/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1685/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1685.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1685",
"merged_at": "2021-01-06T09:31:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1685.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1685"
} | true | [
"Thanks for reviewing and merging!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4403/comments | https://api.github.com/repos/huggingface/datasets/issues/4403/events | https://github.com/huggingface/datasets/pull/4403 | 1,248,390,134 | PR_kwDODunzps44dcpl | 4,403 | Uncomment logging deactivation for ArrowBasedBuilder | [] | closed | false | null | 1 | 2022-05-25T16:46:15Z | 2022-05-31T08:33:36Z | 2022-05-31T08:25:02Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4403/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4403/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4403.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4403",
"merged_at": "2022-05-31T08:25:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4403.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4403"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2539/comments | https://api.github.com/repos/huggingface/datasets/issues/2539/events | https://github.com/huggingface/datasets/pull/2539 | 927,952,429 | MDExOlB1bGxSZXF1ZXN0Njc2MDI5MDY5 | 2,539 | remove wi_locness dataset due to licensing issues | [] | closed | false | null | 5 | 2021-06-23T07:35:32Z | 2021-06-25T14:52:42Z | 2021-06-25T14:52:42Z | null | It was brought to my attention that this dataset's license is not only missing, but also prohibits redistribution. I contacted the original author to apologize for this oversight and asked if we could still use it, but unfortunately we can't and the author kindly asked to take down this dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2539/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2539/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2539.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2539",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2539.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2539"
} | true | [
"Hi ! I'm sorry to hear that.\r\nThough we are not redistributing the dataset, we just provide a python script that downloads and process the dataset from its original source hosted at https://www.cl.cam.ac.uk\r\n\r\nTherefore I'm not sure what's the issue with licensing. What do you mean exactly ?",
"I think tha... |
https://api.github.com/repos/huggingface/datasets/issues/2387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2387/comments | https://api.github.com/repos/huggingface/datasets/issues/2387/events | https://github.com/huggingface/datasets/issues/2387 | 897,566,666 | MDU6SXNzdWU4OTc1NjY2NjY= | 2,387 | datasets 1.6 ignores cache | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 13 | 2021-05-21T00:12:58Z | 2021-05-26T16:07:54Z | 2021-05-26T16:07:54Z | null | Moving from https://github.com/huggingface/transformers/issues/11801#issuecomment-845546612
Quoting @VictorSanh:
>
> I downgraded datasets to `1.5.0` and printed `tokenized_datasets.cache_files` (L335):
>
> > `{'train': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-c6aefe81ca4e5152.arrow'}], 'validation': [{'filename': '/home/victor/.cache/huggingface/datasets/openwebtext10k/plain_text/1.0.0/3a8df094c671b4cb63ed0b41f40fb3bd855e9ce2e3765e5df50abcdfb5ec144b/cache-97cf4c813e6469c6.arrow'}]}`
>
> while the same command with the latest version of datasets (actually starting at `1.6.0`) gives:
> > `{'train': [], 'validation': []}`
>
I also confirm that downgrading to `datasets==1.5.0` makes things fast again - i.e. cache is used.
to reproduce:
```
USE_TF=0 python examples/pytorch/language-modeling/run_clm.py \
--model_name_or_path gpt2 \
--dataset_name "stas/openwebtext-10k" \
--output_dir output_dir \
--overwrite_output_dir \
--do_train \
--do_eval \
--max_train_samples 1000 \
--max_eval_samples 200 \
--per_device_train_batch_size 4 \
--per_device_eval_batch_size 4 \
--num_train_epochs 1 \
--warmup_steps 8 \
--block_size 64 \
--fp16 \
--report_to none
```
The first time, the startup is slow and shows some 5 tqdm bars. It shouldn't do that on subsequent runs, but with `datasets>1.5.0` it rebuilds on every run.
@lhoestq
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2387/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2387/timeline | null | completed | null | null | false | [
"Looks like there are multiple issues regarding this (#2386, #2322) and it's a WIP #2329. Currently these datasets are being loaded in-memory which is causing this issue. Quoting @mariosasko here for a quick fix:\r\n\r\n> set `keep_in_memory` to `False` when loading a dataset (`sst = load_dataset(\"sst\", keep_in_m... |
https://api.github.com/repos/huggingface/datasets/issues/2895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2895/comments | https://api.github.com/repos/huggingface/datasets/issues/2895/events | https://github.com/huggingface/datasets/pull/2895 | 993,462,274 | MDExOlB1bGxSZXF1ZXN0NzMxNjQ0NTY2 | 2,895 | Use pyarrow.Table.replace_schema_metadata instead of pyarrow.Table.cast | [] | closed | false | null | 0 | 2021-09-10T17:56:57Z | 2021-09-21T22:50:01Z | 2021-09-21T08:18:35Z | null | This PR partially addresses #2252.
``update_metadata_with_features`` uses ``Table.cast`` which slows down ``load_from_disk`` (and possibly other methods that use it) for very large datasets. Since ``update_metadata_with_features`` is only updating the schema metadata, it makes more sense to use ``pyarrow.Table.replace_schema_metadata`` which is much faster. This PR adds a ``replace_schema_metadata`` method to all table classes, and modifies ``update_metadata_with_features`` to use it instead of ``cast``. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2895/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2895/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2895.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2895",
"merged_at": "2021-09-21T08:18:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2895.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2895"
} | true | [] |
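A rough illustration of the difference described in the PR body, using plain `pyarrow` (this is not the library's actual code, just a sketch of why replacing only the schema metadata is cheaper than casting):
```python
import pyarrow as pa

table = pa.table({"text": ["a", "b"], "label": [0, 1]})
metadata = {b"huggingface": b"{}"}

# Table.cast rebuilds the table against the target schema...
slow = table.cast(table.schema.with_metadata(metadata))

# ...while replace_schema_metadata only swaps the metadata and
# reuses the existing column buffers
fast = table.replace_schema_metadata(metadata)
```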
https://api.github.com/repos/huggingface/datasets/issues/2814 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2814/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2814/comments | https://api.github.com/repos/huggingface/datasets/issues/2814/events | https://github.com/huggingface/datasets/pull/2814 | 973,632,645 | MDExOlB1bGxSZXF1ZXN0NzE1MDUwODc4 | 2,814 | Bump tqdm version | [] | closed | false | null | 0 | 2021-08-18T12:51:29Z | 2021-08-18T13:44:11Z | 2021-08-18T13:39:50Z | null | The recently released tqdm 4.62.1 includes a fix for PermissionError on Windows (submitted by me in https://github.com/tqdm/tqdm/pull/1207), which means we can remove expensive `gc.collect` calls by bumping tqdm to that version. This PR does exactly that and, additionally, fixes a `disable_tqdm` definition that would previously, if used, raise a PermissionError on Windows. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2814/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2814/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2814.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2814",
"merged_at": "2021-08-18T13:39:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2814.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2814"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1021/comments | https://api.github.com/repos/huggingface/datasets/issues/1021/events | https://github.com/huggingface/datasets/pull/1021 | 755,644,559 | MDExOlB1bGxSZXF1ZXN0NTMxMzE4MTQw | 1,021 | Add Gutenberg time references dataset | [] | closed | false | null | 1 | 2020-12-02T22:05:26Z | 2020-12-03T10:33:39Z | 2020-12-03T10:33:38Z | null | This PR adds the gutenberg_time dataset: https://arxiv.org/abs/2011.04124 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1021/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1021/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1021.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1021",
"merged_at": "2020-12-03T10:33:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1021.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1021"
} | true | [
"Description: \"A clean data resource containing all explicit time references in a dataset of 52,183 novels whose full text is available via Project Gutenberg and the Hathi Trust Digital Library 2.\" > This is just the Gutenberg part.\r\n\r\nAlso, the paragraph at the top of the file would make a good Dataset Summa... |
https://api.github.com/repos/huggingface/datasets/issues/3529 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3529/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3529/comments | https://api.github.com/repos/huggingface/datasets/issues/3529/events | https://github.com/huggingface/datasets/pull/3529 | 1,093,846,356 | PR_kwDODunzps4wiPA9 | 3,529 | Update README.md | [] | closed | false | null | 0 | 2022-01-04T23:52:47Z | 2022-01-05T12:50:15Z | 2022-01-05T12:50:14Z | null | Updating licensing information & personal and sensitive information. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3529/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3529/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3529.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3529",
"merged_at": "2022-01-05T12:50:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3529.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3529"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/578 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/578/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/578/comments | https://api.github.com/repos/huggingface/datasets/issues/578/events | https://github.com/huggingface/datasets/pull/578 | 694,849,940 | MDExOlB1bGxSZXF1ZXN0NDgxMTczNDE0 | 578 | Add CommonGen Dataset | [] | closed | false | null | 0 | 2020-09-07T08:17:17Z | 2020-09-07T11:50:29Z | 2020-09-07T11:49:07Z | null | CC Authors:
@yuchenlin @MichaelZhouwang | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/578/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/578/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/578.diff",
"html_url": "https://github.com/huggingface/datasets/pull/578",
"merged_at": "2020-09-07T11:49:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/578.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/578"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1602 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1602/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1602/comments | https://api.github.com/repos/huggingface/datasets/issues/1602/events | https://github.com/huggingface/datasets/pull/1602 | 770,841,810 | MDExOlB1bGxSZXF1ZXN0NTQyNTA4NTM4 | 1,602 | second update of id_newspapers_2018 | [] | closed | false | null | 0 | 2020-12-18T12:16:37Z | 2020-12-22T10:41:15Z | 2020-12-22T10:41:14Z | null | The feature "url" is currently set wrongly to data["date"], this PR fix it to data["url"].
I also added an additional POC. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1602/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1602/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1602.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1602",
"merged_at": "2020-12-22T10:41:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1602.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1602"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4590 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4590/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4590/comments | https://api.github.com/repos/huggingface/datasets/issues/4590/events | https://github.com/huggingface/datasets/pull/4590 | 1,287,941,058 | PR_kwDODunzps46htv0 | 4,590 | Generalize meta_path json file creation in load.py [#4540] | [] | closed | false | null | 4 | 2022-06-28T21:48:06Z | 2022-07-08T14:55:13Z | 2022-07-07T13:17:45Z | null | # What does this PR do?
## Summary
*In the function `_copy_script_and_other_resources_in_importable_dir`, using a string split when generating `meta_path` throws an error in the edge case raised in #4540.*
## Additions
-
## Changes
- Changed `meta_path` to use `os.path.splitext` instead of `str.split` to generalize the code.
## Deletions
-
## Issues Addressed :
Fixes #4540 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4590/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4590/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4590.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4590",
"merged_at": "2022-07-07T13:17:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4590.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4590"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova, Can you please review this PR for Issue #4540 ",
"@lhoestq Thank you for merging the PR . Is there a slack channel for contributing to the datasets library. I would love to work on the library and make meaningfu... |
https://api.github.com/repos/huggingface/datasets/issues/1601 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1601/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1601/comments | https://api.github.com/repos/huggingface/datasets/issues/1601/events | https://github.com/huggingface/datasets/pull/1601 | 770,758,914 | MDExOlB1bGxSZXF1ZXN0NTQyNDQzNDE3 | 1,601 | second update of the id_newspapers_2018 | [] | closed | false | null | 1 | 2020-12-18T10:10:20Z | 2020-12-18T12:15:31Z | 2020-12-18T12:15:31Z | null | The feature "url" is currently set wrongly to data["date"], this PR fix it to data["url"].
I also added an additional POC. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1601/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1601/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1601.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1601",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1601.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1601"
} | true | [
"I close this PR, since it based on 1 week old repo. And I will create a new one"
] |
https://api.github.com/repos/huggingface/datasets/issues/4021 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4021/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4021/comments | https://api.github.com/repos/huggingface/datasets/issues/4021/events | https://github.com/huggingface/datasets/pull/4021 | 1,180,805,092 | PR_kwDODunzps41BLAf | 4,021 | Fix `map` remove_columns on empty dataset | [] | closed | false | null | 1 | 2022-03-25T13:36:29Z | 2022-03-29T13:41:31Z | 2022-03-29T13:35:44Z | null | On an empty dataset, the `remove_columns` parameter of `map` currently doesn't actually remove the columns:
```python
>>> ds = datasets.load_dataset("glue", "rte")
>>> ds_filtered = ds.filter(lambda x: x["label"] != -1)
>>> ds_mapped = ds_filtered.map(lambda x: x, remove_columns=["label"])
>>> print(repr(ds_mapped.column_names))
{
'train': ['sentence1', 'sentence2', 'idx'],
'validation': ['sentence1', 'sentence2', 'idx'],
'test': ['sentence1', 'sentence2', 'label', 'idx']
}
```
I fixed this error and updated the tests | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4021/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4021/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4021.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4021",
"merged_at": "2022-03-29T13:35:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4021.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4021"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/297 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/297/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/297/comments | https://api.github.com/repos/huggingface/datasets/issues/297/events | https://github.com/huggingface/datasets/issues/297 | 643,444,625 | MDU6SXNzdWU2NDM0NDQ2MjU= | 297 | Error in Demo for Specific Datasets | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | 3 | 2020-06-23T00:38:42Z | 2020-07-17T17:43:06Z | 2020-07-17T17:43:06Z | null | Selecting `natural_questions` or `newsroom` dataset in the online demo results in an error similar to the following.

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/297/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/297/timeline | null | completed | null | null | false | [
"Thanks for reporting these errors :)\r\n\r\nI can actually see two issues here.\r\n\r\nFirst, datasets like `natural_questions` require apache_beam to be processed. Right now the import is not at the right place so we have this error message. However, even the imports are fixed, the nlp viewer doesn't actually hav... |
https://api.github.com/repos/huggingface/datasets/issues/665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/665/comments | https://api.github.com/repos/huggingface/datasets/issues/665/events | https://github.com/huggingface/datasets/issues/665 | 707,037,738 | MDU6SXNzdWU3MDcwMzc3Mzg= | 665 | runing dataset.map, it raises TypeError: can't pickle Tokenizer objects | [] | closed | false | null | 8 | 2020-09-23T04:28:14Z | 2020-10-08T09:32:16Z | 2020-10-08T09:32:16Z | null | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
encodings = tokenizer.encode_plus(input_pairs, pad_to_max_length=True, max_length=512)
context_encodings = tokenizer.encode_plus(example['context'])
    # Compute start and end tokens for labels using Transformers' fast tokenizers' alignment methods.
# this will give us the position of answer span in the context text
start_idx, end_idx = get_correct_alignement(example['context'], example['answers'])
start_positions_context = context_encodings.char_to_token(start_idx)
end_positions_context = context_encodings.char_to_token(end_idx-1)
# here we will compute the start and end position of the answer in the whole example
# as the example is encoded like this <s> question</s></s> context</s>
# and we know the postion of the answer in the context
# we can just find out the index of the sep token and then add that to position + 1 (+1 because there are two sep tokens)
# this will give us the position of the answer span in whole example
sep_idx = encodings['input_ids'].index(tokenizer.sep_token_id)
start_positions = start_positions_context + sep_idx + 1
end_positions = end_positions_context + sep_idx + 1
if end_positions > 512:
start_positions, end_positions = 0, 0
encodings.update({'start_positions': start_positions,
'end_positions': end_positions,
'attention_mask': encodings['attention_mask']})
return encodings
```
Then I run `dataset.map(convert_to_features)`, it raise
```
In [59]: a.map(convert_to_features)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-59-c453b508761d> in <module>
----> 1 a.map(convert_to_features)
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1242 fn_kwargs=fn_kwargs,
1243 new_fingerprint=new_fingerprint,
-> 1244 update_data=update_data,
1245 )
1246 else:
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
151 "output_all_columns": self._output_all_columns,
152 }
--> 153 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
154 if new_format["columns"] is not None:
155 new_format["columns"] = list(set(new_format["columns"]) & set(out.column_names))
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
156 kwargs_for_fingerprint["fingerprint_name"] = fingerprint_name
157 kwargs[fingerprint_name] = update_fingerprint(
--> 158 self._fingerprint, transform, kwargs_for_fingerprint
159 )
160
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update_fingerprint(fingerprint, transform, transform_args)
103 for key in sorted(transform_args):
104 hasher.update(key)
--> 105 hasher.update(transform_args[key])
106 return hasher.hexdigest()
107
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in update(self, value)
55 def update(self, value):
56 self.m.update(f"=={type(value)}==".encode("utf8"))
---> 57 self.m.update(self.hash(value).encode("utf-8"))
58
59 def hexdigest(self):
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in hash(cls, value)
51 return cls.dispatch[type(value)](cls, value)
52 else:
---> 53 return cls.hash_default(value)
54
55 def update(self, value):
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in hash_default(cls, value)
44 @classmethod
45 def hash_default(cls, value):
---> 46 return cls.hash_bytes(dumps(value))
47
48 @classmethod
/opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dumps(obj)
365 file = StringIO()
366 with _no_cache_fields(obj):
--> 367 dump(obj, file)
368 return file.getvalue()
369
/opt/conda/lib/python3.7/site-packages/datasets/utils/py_utils.py in dump(obj, file)
337 def dump(obj, file):
338 """pickle an object to a file"""
--> 339 Pickler(file, recurse=True).dump(obj)
340 return
341
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in dump(self, obj)
444 raise PicklingError(msg)
445 else:
--> 446 StockPickler.dump(self, obj)
447 stack.clear() # clear record of 'recursion-sensitive' pickled objects
448 return
/opt/conda/lib/python3.7/pickle.py in dump(self, obj)
435 if self.proto >= 4:
436 self.framer.start_framing()
--> 437 self.save(obj)
438 self.write(STOP)
439 self.framer.end_framing()
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_function(pickler, obj)
1436 globs, obj.__name__,
1437 obj.__defaults__, obj.__closure__,
-> 1438 obj.__dict__, fkwdefaults), obj=obj)
1439 else:
1440 _super = ('super' in getattr(obj.func_code,'co_names',())) and (_byref is not None) and getattr(pickler, '_recurse', False)
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
636 else:
637 save(func)
--> 638 save(args)
639 write(REDUCE)
640
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/pickle.py in save_tuple(self, obj)
787 write(MARK)
788 for element in obj:
--> 789 save(element)
790
791 if id(obj) in memo:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
547
548 # Save the reduce() output and finally memoize the object
--> 549 self.save_reduce(obj=obj, *rv)
550
551 def persistent_id(self, obj):
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
660
661 if state is not None:
--> 662 save(state)
663 write(BUILD)
664
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
547
548 # Save the reduce() output and finally memoize the object
--> 549 self.save_reduce(obj=obj, *rv)
550
551 def persistent_id(self, obj):
/opt/conda/lib/python3.7/pickle.py in save_reduce(self, func, args, state, listitems, dictitems, obj)
660
661 if state is not None:
--> 662 save(state)
663 write(BUILD)
664
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
502 f = self.dispatch.get(t)
503 if f is not None:
--> 504 f(self, obj) # Call unbound method with explicit self
505 return
506
/opt/conda/lib/python3.7/site-packages/dill/_dill.py in save_module_dict(pickler, obj)
931 # we only care about session the first pass thru
932 pickler._session = False
--> 933 StockPickler.save_dict(pickler, obj)
934 log.info("# D2")
935 return
/opt/conda/lib/python3.7/pickle.py in save_dict(self, obj)
857
858 self.memoize(obj)
--> 859 self._batch_setitems(obj.items())
860
861 dispatch[dict] = save_dict
/opt/conda/lib/python3.7/pickle.py in _batch_setitems(self, items)
883 for k, v in tmp:
884 save(k)
--> 885 save(v)
886 write(SETITEMS)
887 elif n:
/opt/conda/lib/python3.7/pickle.py in save(self, obj, save_persistent_id)
522 reduce = getattr(obj, "__reduce_ex__", None)
523 if reduce is not None:
--> 524 rv = reduce(self.proto)
525 else:
526 reduce = getattr(obj, "__reduce__", None)
TypeError: can't pickle Tokenizer objects
```
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/665/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/665/timeline | null | completed | null | null | false | [
"Hi !\r\nIt works on my side with both the LongFormerTokenizer and the LongFormerTokenizerFast.\r\n\r\nWhich version of transformers/datasets are you using ?",
"transformers and datasets are both the latest",
"Then I guess you need to give us more informations on your setup (OS, python, GPU, etc) or a Google Co... |
https://api.github.com/repos/huggingface/datasets/issues/5342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5342/comments | https://api.github.com/repos/huggingface/datasets/issues/5342/events | https://github.com/huggingface/datasets/issues/5342 | 1,485,244,178 | I_kwDODunzps5YhwcS | 5,342 | Emotion dataset cannot be downloaded | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | 7 | 2022-12-08T19:07:09Z | 2023-02-23T19:13:19Z | 2022-12-09T10:46:11Z | null | ### Describe the bug
The emotion dataset gives a FileNotFoundError. The full error is: `FileNotFoundError: Couldn't find file at https://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt?dl=1`.
It was working yesterday (December 7, 2022), but stopped working today (December 8, 2022).
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("emotion")
```
### Expected behavior
The dataset should load properly.
### Environment info
- `datasets` version: 2.7.1
- Platform: Windows-10-10.0.19045-SP0
- Python version: 3.9.13
- PyArrow version: 10.0.1
- Pandas version: 1.5.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5342/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5342/timeline | null | completed | null | null | false | [
"Hi @cbarond there's already an open issue at https://github.com/dair-ai/emotion_dataset/issues/5, as the data seems to be missing now, so check that issue instead 👍🏻 ",
"Thanks @cbarond for reporting and @alvarobartt for pointing to the issue we opened in the author's repo.\r\n\r\nIndeed, this issue was first ... |
https://api.github.com/repos/huggingface/datasets/issues/5996 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5996/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5996/comments | https://api.github.com/repos/huggingface/datasets/issues/5996/events | https://github.com/huggingface/datasets/pull/5996 | 1,779,294,374 | PR_kwDODunzps5UKP0i | 5,996 | Deprecate `use_auth_token` in favor of `token` | [] | closed | false | null | 9 | 2023-06-28T16:26:38Z | 2023-07-05T15:22:20Z | 2023-07-03T16:03:33Z | null | ... to be consistent with `transformers` and `huggingface_hub`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5996/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5996/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5996.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5996",
"merged_at": "2023-07-03T16:03:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5996.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5996"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/2379 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2379/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2379/comments | https://api.github.com/repos/huggingface/datasets/issues/2379/events | https://github.com/huggingface/datasets/pull/2379 | 895,252,597 | MDExOlB1bGxSZXF1ZXN0NjQ3NDk2ODUx | 2,379 | Disallow duplicate keys in yaml tags | [] | closed | false | null | 0 | 2021-05-19T10:10:07Z | 2021-05-19T10:45:32Z | 2021-05-19T10:45:31Z | null | Make sure that there's no duplidate keys in yaml tags.
I added the check in the yaml tree constructor's method, so that the verification is done at every level in the yaml structure.
cc @julien-c | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2379/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2379/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2379.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2379",
"merged_at": "2021-05-19T10:45:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2379.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2379"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2232/comments | https://api.github.com/repos/huggingface/datasets/issues/2232/events | https://github.com/huggingface/datasets/pull/2232 | 860,075,931 | MDExOlB1bGxSZXF1ZXN0NjE3MDQyNTI4 | 2,232 | Start filling GLUE dataset card | [] | closed | false | null | 2 | 2021-04-16T18:37:37Z | 2021-04-21T09:33:09Z | 2021-04-21T09:33:08Z | null | The dataset card was pretty much empty.
I added the descriptions (mainly from TFDS since the script is the same), and I also added the task tags as well as examples for a subset of the tasks.
cc @sgugger | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2232/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2232/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2232.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2232",
"merged_at": "2021-04-21T09:33:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2232.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2232"
} | true | [
"I replaced all the \"we\" and applied your suggestion",
"Merging this for now, we can continue improving this card in other PRs :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/5189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5189/comments | https://api.github.com/repos/huggingface/datasets/issues/5189/events | https://github.com/huggingface/datasets/issues/5189 | 1,432,769,143 | I_kwDODunzps5VZlJ3 | 5,189 | Reduce friction in tabular dataset workflow by eliminating having splits when dataset is loaded | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 33 | 2022-11-02T09:15:02Z | 2022-12-06T12:13:17Z | null | null | ### Feature request
Sorry for the cryptic name, but I'd like to explain using the code itself. When I want to load a specific dataset from a repository (for instance, this one: https://huggingface.co/datasets/inria-soda/tabular-benchmark):
```python
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
print(next(iter(dataset["train"])))
```
The `datasets` library is essentially designed for people who'd like to use benchmark datasets on various modalities to fine-tune their models, and these benchmark datasets usually have pre-defined train and test splits. However, for tabular workflows, having fixed train and test splits usually ends up with the model overfitting to the validation split, so users usually prefer validation techniques like `StratifiedKFoldCrossValidation`, or `GridSearchCrossValidation` when they tune hyperparameters; the common behavior is therefore to create their own splits. Even [in this paper](https://hal.archives-ouvertes.fr/hal-03723551) a benchmark is introduced, but the split is done by the authors.
It's a bit confusing for the average tabular user to load a dataset and see `"train"`, so it would be nice if we did not load the dataset into a split called `train` by default.
```diff
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
-print(next(iter(dataset["train"])))
+print(next(iter(dataset)))
```
### Motivation
I explained it above 😅
### Your contribution
I think this is quite a big change that seems small (e.g. how do we determine which datasets should not be loaded into a train split?), so it's best if we discuss first! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5189/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5189/timeline | null | null | null | null | false | [
"I have to admit I'm not a fan of this idea, as this would result in a non-consistent behavior between tabular and non-tabular datasets, which is confusing if done without the context you provided. Instead, we could consider returning a `Dataset` object rather than `DatasetDict` if there is only one split in the ge... |
https://api.github.com/repos/huggingface/datasets/issues/5713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5713/comments | https://api.github.com/repos/huggingface/datasets/issues/5713/events | https://github.com/huggingface/datasets/issues/5713 | 1,657,141,251 | I_kwDODunzps5ixfgD | 5,713 | ArrowNotImplementedError when loading dataset from the hub | [] | closed | false | null | 2 | 2023-04-06T10:27:22Z | 2023-04-06T13:06:22Z | 2023-04-06T13:06:21Z | null | ### Describe the bug
Hello,
I have created a dataset by using the image loader. Once the dataset is created I try to download it and I get the error:
```
Traceback (most recent call last):
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1860, in _prepare_split_single
for _, table in generator:
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 69, in _generate_tables
for batch_idx, record_batch in enumerate(
File "pyarrow/_parquet.pyx", line 1323, in iter_batches
File "pyarrow/error.pxi", line 121, in pyarrow.lib.check_status
pyarrow.lib.ArrowNotImplementedError: Nested data conversions not implemented for chunked array outputs
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/jplu/miniconda3/envs/image-xp/lib/python3.10/site-packages/datasets/builder.py", line 1893, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Steps to reproduce the bug
Create the dataset and push it to the hub:
```python
from datasets import load_dataset
dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")
dataset.push_to_hub("org/dataset-name", private=True, max_shard_size="1GB")
```
Then use it:
```python
from datasets import load_dataset
dataset = load_dataset("org/dataset-name")
```
### Expected behavior
To properly download and use the pushed dataset.
Something else to note: I specified shards of 1GB max, but in the end an almost 7GB single file was pushed for the train set.
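Based on the maintainer guidance in the comments below (Arrow cannot read back nested data from Parquet shards that end up larger than ~2GB), a possible workaround is to re-push with a much smaller `max_shard_size`. This is only a hedged sketch; the repo name and shard size are illustrative:
```python
from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="/path/to/dataset")
# keep each shard well below the 2GB limit, since the shard-size estimate can overshoot
dataset.push_to_hub("org/dataset-name", private=True, max_shard_size="500MB")
```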
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.10
- Huggingface_hub version: 0.13.3
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5713/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5713/timeline | null | completed | null | null | false | [
"Hi Julien ! This sounds related to https://github.com/huggingface/datasets/issues/5695 - TL;DR: you need to have shards smaller than 2GB to avoid this issue\r\n\r\nThe number of rows per shard is computed using an estimated size of the full dataset, which can sometimes lead to shards bigger than `max_shard_size`. ... |
https://api.github.com/repos/huggingface/datasets/issues/1954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1954/comments | https://api.github.com/repos/huggingface/datasets/issues/1954/events | https://github.com/huggingface/datasets/issues/1954 | 817,565,563 | MDU6SXNzdWU4MTc1NjU1NjM= | 1,954 | add a new column | [] | closed | false | null | 2 | 2021-02-26T18:17:27Z | 2021-04-29T14:50:43Z | 2021-04-29T14:50:43Z | null | Hi
I need to add a new column to the dataset; I was wondering how this can be done? Thanks
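For reference, a minimal sketch of how a column can be added with `map` (the column name and values below are made up for illustration):
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# map can add a new key to each example, which becomes a new column
ds = ds.map(lambda example: {"text_length": len(example["text"])})
print(ds.column_names)
```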
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1954/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1954/timeline | null | completed | null | null | false | [
"Hi\r\nnot sure how change the lable after creation, but this is an issue not dataset request. thanks ",
"Hi ! Currently you have to use `map` . You can see an example of how to do it in this comment: https://github.com/huggingface/datasets/issues/853#issuecomment-727872188\r\n\r\nIn the future we'll add support ... |
https://api.github.com/repos/huggingface/datasets/issues/1845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1845/comments | https://api.github.com/repos/huggingface/datasets/issues/1845/events | https://github.com/huggingface/datasets/pull/1845 | 803,714,493 | MDExOlB1bGxSZXF1ZXN0NTY5NTk2MTIz | 1,845 | Enable logging propagation and remove logging handler | [] | closed | false | null | 1 | 2021-02-08T16:22:13Z | 2021-02-09T14:22:38Z | 2021-02-09T14:22:37Z | null | We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691
But since it's now fixed we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as asked in #1826
I also removed the handler that was added since, according to the logging [documentation](https://docs.python.org/3/howto/logging.html#configuring-logging-for-a-library):
> It is strongly advised that you do not add any handlers other than NullHandler to your library’s loggers. This is because the configuration of handlers is the prerogative of the application developer who uses your library. The application developer knows their target audience and what handlers are most appropriate for their application: if you add handlers ‘under the hood’, you might well interfere with their ability to carry out unit tests and deliver logs which suit their requirements.
It could have been useful if we wanted to have a custom formatter for the logging but I think it's more important to keep the logging as default to not interfere with the users' logging management.
Therefore I also removed the two methods `datasets.logging.enable_default_handler` and `datasets.logging.disable_default_handler`.
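With propagation enabled and no library handler, logging configuration is left entirely to the application. A minimal sketch of what that could look like downstream (standard-library logging only; the format string is just an example):
```python
import logging

# The application configures the root logger; records emitted by the
# "datasets" loggers now propagate up to it.
logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    level=logging.INFO,
)

# Optionally adjust the level of the library's top-level logger
logging.getLogger("datasets").setLevel(logging.DEBUG)
```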
cc @albertvillanova this should let you use capsys/caplog in pytest
cc @LysandreJik @sgugger if you want to do the same in `transformers` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1845/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1845/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1845.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1845",
"merged_at": "2021-02-09T14:22:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1845.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1845"
} | true | [
"Thank you @lhoestq. This logging configuration makes more sense to me.\r\n\r\nOnce propagation is allowed, the end-user can customize logging behavior and add custom handlers to the proper top logger in the hierarchy.\r\n\r\nAnd I also agree with following the best practices and removing any custom handlers:\r\n- ... |
https://api.github.com/repos/huggingface/datasets/issues/772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/772/comments | https://api.github.com/repos/huggingface/datasets/issues/772/events | https://github.com/huggingface/datasets/pull/772 | 731,612,430 | MDExOlB1bGxSZXF1ZXN0NTExNjg4ODMx | 772 | Fix metric with cache dir | [] | closed | false | null | 0 | 2020-10-28T16:43:13Z | 2020-10-29T09:34:44Z | 2020-10-29T09:34:43Z | null | The cache_dir provided by the user was concatenated twice and therefore causing FileNotFound errors.
The tests didn't cover the case of providing `cache_dir=` for metrics because of a stupid issue (it was not using the right parameter).
I removed the double concatenation and fixed the tests.
Fix #728 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/772/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/772/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/772.diff",
"html_url": "https://github.com/huggingface/datasets/pull/772",
"merged_at": "2020-10-29T09:34:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/772.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/772"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1987/comments | https://api.github.com/repos/huggingface/datasets/issues/1987/events | https://github.com/huggingface/datasets/issues/1987 | 822,308,956 | MDU6SXNzdWU4MjIzMDg5NTY= | 1,987 | wmt15 is broken | [] | closed | false | null | 1 | 2021-03-04T16:46:25Z | 2022-10-05T13:12:26Z | 2022-10-05T13:12:26Z | null | While testing the hotfix, I tried a random other wmt release and found wmt15 to be broken:
```
python -c 'from datasets import load_dataset; load_dataset("wmt15", "de-en")'
Downloading: 2.91kB [00:00, 818kB/s]
Downloading: 3.02kB [00:00, 897kB/s]
Downloading: 41.1kB [00:00, 19.1MB/s]
Downloading and preparing dataset wmt15/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt15/de-en/1.0.0/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 578, in download_and_prepare
self._download_and_prepare(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 634, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt15/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f/wmt_utils.py", line 757, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 283, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 191, in download
downloaded_path_or_paths = map_nested(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 203, in map_nested
mapped = [
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 204, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested
return function(data_struct)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 214, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 614, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wmt/wmt15/resolve/main/training-parallel-nc-v10.tgz
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1987/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1987/timeline | null | completed | null | null | false | [
"It's reachable for the viewer and me, so I suppose it was down at that moment?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2341/comments | https://api.github.com/repos/huggingface/datasets/issues/2341/events | https://github.com/huggingface/datasets/pull/2341 | 882,370,933 | MDExOlB1bGxSZXF1ZXN0NjM1OTExODI2 | 2,341 | Added the Ascent KB | [] | closed | false | null | 1 | 2021-05-09T14:17:39Z | 2021-05-11T09:16:59Z | 2021-05-11T09:16:59Z | null | Added the Ascent Commonsense KB of 8.9M assertions.
- Paper: [Advanced Semantics for Commonsense Knowledge Extraction (WWW'21)](https://arxiv.org/abs/2011.00905)
- Website: https://ascent.mpi-inf.mpg.de/
(I am the author of the dataset) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2341/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2341/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2341.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2341",
"merged_at": "2021-05-11T09:16:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2341.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2341"
} | true | [
"Thanks for approving it!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3485 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3485/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3485/comments | https://api.github.com/repos/huggingface/datasets/issues/3485/events | https://github.com/huggingface/datasets/issues/3485 | 1,089,027,581 | I_kwDODunzps5A6T39 | 3,485 | skip columns which cannot set to specific format when set_format | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 2 | 2021-12-27T07:19:55Z | 2021-12-27T09:07:07Z | 2021-12-27T09:07:07Z | null | **Is your feature request related to a problem? Please describe.**
When using `dataset.set_format("torch")`, I must make sure every column in the dataset can be converted to `torch`; however, sometimes I want to keep some string columns.
**Describe the solution you'd like**
Skip columns which cannot be set to the specified format in `set_format`, instead of raising an error.
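For reference, a sketch of the existing workaround using the `columns` and `output_all_columns` arguments of `set_format`, so only the listed columns are converted and the remaining (e.g. string) columns are kept as-is; the column names here are placeholders:
```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")
# hypothetical tensor-friendly columns added for illustration
ds = ds.map(lambda ex: {"input_ids": [0, 1, 2], "attention_mask": [1, 1, 1]})

# only the listed columns are converted to torch tensors;
# all other columns are still returned, unformatted
ds.set_format("torch", columns=["input_ids", "attention_mask"], output_all_columns=True)
print(ds[0].keys())
```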
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3485/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3485/timeline | null | completed | null | null | false | [
"You can add columns that you wish to set into `torch` format using `dataset.set_format(\"torch\", ['id', 'abc'])` so that input batch of the transform only contains those columns",
"Sorry, I miss `output_all_columns` args and thought after `dataset.set_format(\"torch\", columns=columns)` I can only get specific ... |
https://api.github.com/repos/huggingface/datasets/issues/4529 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4529/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4529/comments | https://api.github.com/repos/huggingface/datasets/issues/4529/events | https://github.com/huggingface/datasets/issues/4529 | 1,276,729,303 | I_kwDODunzps5MGVfX | 4,529 | Ecoset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 1 | 2022-06-20T10:39:34Z | 2022-06-21T16:17:16Z | null | null | ## Adding a Dataset
- **Name:** *Ecoset*
- **Description:** *https://www.kietzmannlab.org/ecoset/*
- **Paper:** *https://doi.org/10.1073/pnas.2011417118*
- **Data:** *https://codeocean.com/capsule/9570390/tree/v1*
- **Motivation:**
**Ecoset** was created as a clean and ecologically valid alternative to **Imagenet**.
It is a large image recognition dataset, similar to Imagenet in size and structure. However, the authors of ecoset claim several improvements over Imagenet, like:
- more ecologically valid classes (e.g. not over-focussed on distinguishing different dog breeds)
- less NSFW content
- 'pre-packed image recognition models' that come with the dataset and can be used for validation of other models.
I am working for one of the authors of the paper with the aim of bringing Ecoset to huggingface datasets. Therefore I can work on this issue personally, but could use some help from devs and experienced users if the dataset is of interest to them. I phrased some of my questions on [discuss.huggingface](https://discuss.huggingface.co/t/handling-large-image-datasets/19373).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4529/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4529/timeline | null | null | null | null | false | [
"Hi! Very cool dataset! I answered your questions on the forum. Also, feel free to comment `#self-assign` on this issue to self-assign it."
] |
https://api.github.com/repos/huggingface/datasets/issues/944 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/944/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/944/comments | https://api.github.com/repos/huggingface/datasets/issues/944/events | https://github.com/huggingface/datasets/pull/944 | 754,228,947 | MDExOlB1bGxSZXF1ZXN0NTMwMTY0NTU5 | 944 | Add German Legal Entity Recognition Dataset | [] | closed | false | null | 1 | 2020-12-01T09:38:22Z | 2020-12-03T13:06:56Z | 2020-12-03T13:06:55Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/944/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/944/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/944.diff",
"html_url": "https://github.com/huggingface/datasets/pull/944",
"merged_at": "2020-12-03T13:06:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/944.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/944"
} | true | [
"thanks ! merging this one"
] | |
https://api.github.com/repos/huggingface/datasets/issues/1402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1402/comments | https://api.github.com/repos/huggingface/datasets/issues/1402/events | https://github.com/huggingface/datasets/pull/1402 | 760,538,325 | MDExOlB1bGxSZXF1ZXN0NTM1MzUzMzE0 | 1,402 | adding covid-tweets-japanese (again) | [] | closed | false | null | 4 | 2020-12-09T17:46:46Z | 2020-12-13T17:54:14Z | 2020-12-13T17:47:36Z | null | I mistakenly used git rebase and was in too much of a hurry to fix it. However, I didn't fully consider the use of git reset, so I unintentionally closed PR (#1367) altogether. Sorry about that.
I'll make a new PR. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1402/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1402/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1402.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1402",
"merged_at": "2020-12-13T17:47:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1402.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1402"
} | true | [
"README.md is not created yet. I'll add it soon.",
"Thank you for your detailed code review! It's so helpful.\r\nI'll reflect them to the code in 24 hours.\r\n\r\nYou may have told me in Slack (I cannot find the conversation log though I've looked through threads), but I'm sorry it seems I'm still misunderstandin... |
https://api.github.com/repos/huggingface/datasets/issues/2767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2767/comments | https://api.github.com/repos/huggingface/datasets/issues/2767/events | https://github.com/huggingface/datasets/issues/2767 | 963,002,120 | MDU6SXNzdWU5NjMwMDIxMjA= | 2,767 | equal operation to perform unbatch for huggingface datasets | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 5 | 2021-08-06T19:45:52Z | 2022-03-07T13:58:00Z | 2022-03-07T13:58:00Z | null | Hi
I need to use "unbatch" operation in tensorflow on a huggingface dataset, I could not find this operation, could you kindly direct me how I can do it, here is the problem I am trying to solve:
I am considering "record" dataset in SuperGlue and I need to replicate each entery of the dataset for each answer, to make it similar to what T5 originally did:
https://github.com/google-research/text-to-text-transfer-transformer/blob/3c58859b8fe72c2dbca6a43bc775aa510ba7e706/t5/data/preprocessors.py#L925
Here please find an example:
For example, a typical example from ReCoRD might look like
{
'passage': 'This is the passage.',
'query': 'A @placeholder is a bird.',
'entities': ['penguin', 'potato', 'pigeon'],
'answers': ['penguin', 'pigeon'],
}
and I need a processor which would turn this example into the following two examples:
{
'inputs': 'record query: A @placeholder is a bird. entities: penguin, '
'potato, pigeon passage: This is the passage.',
'targets': 'penguin',
}
and
{
'inputs': 'record query: A @placeholder is a bird. entities: penguin, '
'potato, pigeon passage: This is the passage.',
'targets': 'pigeon',
}
To do this, one needs an unbatch operation, as each entry can map to multiple samples depending on the number of answers. I am not sure how to perform this operation with the huggingface datasets library and would greatly appreciate your help.
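For context, a minimal sketch of how this one-to-many mapping can be expressed with `map(batched=True)`, since a batched map is allowed to return more rows than it receives; `ds` is assumed to be the ReCoRD split and the field names follow the example above:
```python
def explode_answers(batch):
    out = {"inputs": [], "targets": []}
    for passage, query, entities, answers in zip(
        batch["passage"], batch["query"], batch["entities"], batch["answers"]
    ):
        prompt = (
            f"record query: {query} entities: {', '.join(entities)} "
            f"passage: {passage}"
        )
        for answer in answers:  # one output row per answer
            out["inputs"].append(prompt)
            out["targets"].append(answer)
    return out

# remove_columns drops the original fields so only the new, longer columns remain
unbatched = ds.map(explode_answers, batched=True, remove_columns=ds.column_names)
```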
@lhoestq
Thank you very much.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2767/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2767/timeline | null | completed | null | null | false | [
"Hi @lhoestq \r\nMaybe this is clearer to explain like this, currently map function, map one example to \"one\" modified one, lets assume we want to map one example to \"multiple\" examples, in which we do not know in advance how many examples they would be per each entry. I greatly appreciate telling me how I can ... |
https://api.github.com/repos/huggingface/datasets/issues/4104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4104/comments | https://api.github.com/repos/huggingface/datasets/issues/4104/events | https://github.com/huggingface/datasets/issues/4104 | 1,194,072,966 | I_kwDODunzps5HLBuG | 4,104 | Add time series data - stock market | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 8 | 2022-04-06T05:46:58Z | 2022-04-11T09:07:10Z | null | null | ## Adding a Time Series Dataset
- **Name:** 2min ticker data for stock market
- **Description:** Data for 8 stocks collected for 1 month after the start of the Ukraine-Russia war: 4 NSE stocks and 4 NASDAQ stocks, along with technical indicators (additional features) as shown in the image below
- **Data:** Collected by myself from investing.com
- **Motivation:** Test the applicability of transformer-based models on stock market / time series problems
 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4104/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4104/timeline | null | null | null | null | false | [
"Can I use instructions present in below link for time series dataset as well? \r\nhttps://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md ",
"cc'ing @kashif and @NielsRogge for visibility!",
"@INF800 happy to add this dataset! I will try to set a PR by the end of the day... if you can kindly poi... |
https://api.github.com/repos/huggingface/datasets/issues/270 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/270/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/270/comments | https://api.github.com/repos/huggingface/datasets/issues/270/events | https://github.com/huggingface/datasets/issues/270 | 638,121,617 | MDU6SXNzdWU2MzgxMjE2MTc= | 270 | c4 dataset is not viewable in nlpviewer demo | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | 1 | 2020-06-13T08:26:16Z | 2020-10-27T15:35:29Z | 2020-10-27T15:35:13Z | null | I get the following error when I try to view the c4 dataset in [nlpviewer](https://huggingface.co/nlp/viewer/)
```python
ModuleNotFoundError: No module named 'langdetect'
Traceback:
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp_viewer/run.py", line 54, in <module>
configs = get_confs(option.id)
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 591, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/caching.py", line 575, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp_viewer/run.py", line 48, in get_confs
builder_cls = nlp.load.import_main_class(module_path, dataset=True)
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/load.py", line 57, in import_main_class
module = importlib.import_module(module_path)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4.py", line 29, in <module>
from .c4_utils import (
File "/home/sasha/.local/lib/python3.7/site-packages/nlp/datasets/c4/88bb1b1435edad3fb772325710c4a43327cbf4a23b9030094556e6f01e14ec19/c4_utils.py", line 29, in <module>
import langdetect
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/270/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/270/timeline | null | completed | null | null | false | [
"C4 is too large to be shown in the viewer"
] |
https://api.github.com/repos/huggingface/datasets/issues/2108 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2108/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2108/comments | https://api.github.com/repos/huggingface/datasets/issues/2108/events | https://github.com/huggingface/datasets/issues/2108 | 840,181,055 | MDU6SXNzdWU4NDAxODEwNTU= | 2,108 | Is there a way to use a GPU only when training an Index in the process of add_faisis_index? | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | open | false | null | 0 | 2021-03-24T21:32:16Z | 2021-03-25T06:31:43Z | null | null | Motivation - Some FAISS indexes like IVF include a training step that clusters the dataset into a given number of clusters. It would be nice if we could use a GPU to do the training step and convert the index back to CPU, as mentioned in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2108/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2108/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5233/comments | https://api.github.com/repos/huggingface/datasets/issues/5233/events | https://github.com/huggingface/datasets/pull/5233 | 1,447,906,868 | PR_kwDODunzps5C1JVh | 5,233 | Fix shards in IterableDataset.from_generator | [] | closed | false | null | 1 | 2022-11-14T11:42:09Z | 2022-11-14T14:16:03Z | 2022-11-14T14:13:22Z | null | Allow to define a sharded iterable dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5233/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5233/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5233.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5233",
"merged_at": "2022-11-14T14:13:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5233.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5233"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1331/comments | https://api.github.com/repos/huggingface/datasets/issues/1331/events | https://github.com/huggingface/datasets/pull/1331 | 759,677,189 | MDExOlB1bGxSZXF1ZXN0NTM0NjQwMzc5 | 1,331 | First version of the new dataset hausa_voa_topics | [] | closed | false | null | 0 | 2020-12-08T18:28:52Z | 2020-12-10T11:09:53Z | 2020-12-10T11:09:53Z | null | Contains loading script as well as dataset card including YAML tags.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1331/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1331.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1331",
"merged_at": "2020-12-10T11:09:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1331.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1331"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5898/comments | https://api.github.com/repos/huggingface/datasets/issues/5898/events | https://github.com/huggingface/datasets/issues/5898 | 1,726,190,481 | I_kwDODunzps5m45OR | 5,898 | Loading The flores data set for specific language | [] | closed | false | null | 1 | 2023-05-25T17:08:55Z | 2023-05-25T17:21:38Z | 2023-05-25T17:21:37Z | null | ### Describe the bug
I am trying to load the Flores dataset.
The code which is given is:
```
from datasets import load_dataset
dataset = load_dataset("facebook/flores")
```
This gives the error of config name
""ValueError: Config name is missing"
Now if I add some config, it gives me this error:
"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''.
"
How can I load the data for a specific language?
I couldn't find any tutorial. Can anyone help me out?
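For what it's worth, a sketch of the syntax that appears to work, passing the language code as the config name (this is confirmed in the comments below); `ace_Arab` is just one of the available configs:
```python
from datasets import load_dataset

dataset = load_dataset("facebook/flores", "ace_Arab")
print(dataset)
```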
### Steps to reproduce the bug
Step one: load the dataset
`from datasets import load_dataset
dataset = load_dataset("facebook/flores")`
This gives the config name error.
Once a config is given, it gives:
"HFValidationError: Repo id must use alphanumeric chars or '-', '_', '.', '--' and '..' are forbidden, '-' and '.' cannot start or end the name, max length is 96: 'facebook/flores, 'ace_Arab''.
"
### Expected behavior
The dataset should be loaded, but I am receiving an error.
### Environment info
Datasets , python , | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5898/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5898/timeline | null | completed | null | null | false | [
"got that the syntax is like this\r\n\r\ndataset = load_dataset(\"facebook/flores\", \"ace_Arab\")"
] |
https://api.github.com/repos/huggingface/datasets/issues/5947 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5947/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5947/comments | https://api.github.com/repos/huggingface/datasets/issues/5947/events | https://github.com/huggingface/datasets/issues/5947 | 1,754,359,316 | I_kwDODunzps5okWYU | 5,947 | Return the audio filename when decoding fails due to corrupt files | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 2 | 2023-06-13T08:44:09Z | 2023-06-14T12:45:01Z | null | null | ### Feature request
Return the audio filename when audio decoding fails. Although there are currently some checks for mp3 and opus formats based on the library version, there are still cases where audio decoding can fail, e.g. a corrupt file.
### Motivation
When you try to load an audio file dataset and the decoding fails, you can't know which file is corrupt:
```
raise LibsndfileError(err, prefix="Error opening {0!r}: ".format(self.name))
soundfile.LibsndfileError: Error opening <_io.BytesIO object at 0x7f5ab7e38290>: Format not recognised.
```
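As a possible interim workaround (not the requested library change), a hedged sketch that disables decoding and filters out files soundfile cannot open, so at least the offending paths can be identified. It assumes `ds` has an `audio` column backed by file paths:
```python
import soundfile as sf
from datasets import Audio

ds = ds.cast_column("audio", Audio(decode=False))  # keep the raw path/bytes

def is_readable(example):
    path = example["audio"]["path"]
    try:
        sf.read(path)
        return True
    except sf.LibsndfileError:
        print("corrupt file:", path)
        return False

ds = ds.filter(is_readable)
```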
### Your contribution
Make a PR to add exception handling for `LibsndfileError` to return the audio filename or path when soundfile decoding fails. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5947/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5947/timeline | null | null | null | null | false | [
"Hi ! The audio data don't always exist as files on disk - the blobs are often stored in the Arrow files. For now I'd suggest disabling decoding with `.cast_column(\"audio\", Audio(decode=False))` and apply your own decoding that handles corrupted files (maybe to filter them out ?)\r\n\r\ncc @sanchit-gandhi since i... |
https://api.github.com/repos/huggingface/datasets/issues/4754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4754/comments | https://api.github.com/repos/huggingface/datasets/issues/4754/events | https://github.com/huggingface/datasets/pull/4754 | 1,319,681,541 | PR_kwDODunzps48L9p6 | 4,754 | Remove "unkown" language tags | [] | closed | false | null | 1 | 2022-07-27T14:50:12Z | 2022-07-27T15:03:00Z | 2022-07-27T14:51:06Z | null | Following https://github.com/huggingface/datasets/pull/4753 there was still a "unknown" langauge tag in `wikipedia` so the job at https://github.com/huggingface/datasets/runs/7542567336?check_suite_focus=true failed for wikipedia | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4754/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4754/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4754",
"merged_at": "2022-07-27T14:51:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4754"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1935 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1935/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1935/comments | https://api.github.com/repos/huggingface/datasets/issues/1935/events | https://github.com/huggingface/datasets/pull/1935 | 814,623,827 | MDExOlB1bGxSZXF1ZXN0NTc4NTgyMzk1 | 1,935 | add CoVoST2 | [] | closed | false | null | 1 | 2021-02-23T16:28:16Z | 2021-02-24T18:09:32Z | 2021-02-24T18:05:09Z | null | This PR adds the CoVoST2 dataset for speech translation and ASR.
https://github.com/facebookresearch/covost#covost-2
The dataset requires manual download as the download page requests an email address and the URLs are temporary.
The dummy data is a bit bigger because of the mp3 files and 36 configs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1935/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1935/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1935.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1935",
"merged_at": "2021-02-24T18:05:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1935.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1935"
} | true | [
"@patrickvonplaten \r\nI removed the mp3 files, dummy_data is much smaller now!"
] |
https://api.github.com/repos/huggingface/datasets/issues/169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/169/comments | https://api.github.com/repos/huggingface/datasets/issues/169/events | https://github.com/huggingface/datasets/pull/169 | 621,099,682 | MDExOlB1bGxSZXF1ZXN0NDIwMjE1NDkw | 169 | Adding Qanta (Quizbowl) Dataset | [] | closed | false | null | 5 | 2020-05-19T16:03:01Z | 2020-05-26T12:52:31Z | 2020-05-26T12:52:31Z | null | This PR adds the qanta question answering datasets from [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792) and [Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples](https://www.aclweb.org/anthology/Q19-1029/) (adversarial fold)
This partially continues a discussion around fixing dummy data from https://github.com/huggingface/nlp/issues/161
I ran the following code to double check that it works and did some sanity checks on the output. The majority of the code itself is from our `allennlp` version of the dataset reader.
```python
import nlp
# Default is full question
data = nlp.load_dataset('./datasets/qanta')
# Four configs
# Primarily useful for training
data = nlp.load_dataset('./datasets/qanta', 'mode=sentences,char_skip=25')
# Primarily used in evaluation
data = nlp.load_dataset('./datasets/qanta', 'mode=first,char_skip=25')
data = nlp.load_dataset('./datasets/qanta', 'mode=full,char_skip=25')
# Primarily useful in evaluation and "live" play
data = nlp.load_dataset('./datasets/qanta', 'mode=runs,char_skip=25')
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/169/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/169/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/169.diff",
"html_url": "https://github.com/huggingface/datasets/pull/169",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/169.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/169"
} | true | [
"Hi @EntilZha - sorry for waiting so long until taking action here. We created a new command and a new recipe of how to add dummy_data. Can you maybe rebase to `master` as explained in 7. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp and check that your dummy data is cor... |
https://api.github.com/repos/huggingface/datasets/issues/6034 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6034/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6034/comments | https://api.github.com/repos/huggingface/datasets/issues/6034/events | https://github.com/huggingface/datasets/issues/6034 | 1,804,501,361 | I_kwDODunzps5rjoFx | 6,034 | load_dataset hangs on WSL | [] | closed | false | null | 3 | 2023-07-14T09:03:10Z | 2023-07-14T14:48:29Z | 2023-07-14T14:48:29Z | null | ### Describe the bug
load_dataset simply hangs. It happens once every ~5 times, and interestingly hangs for a multiple of 5 minutes (hangs for 5/10/15 minutes). Using the profiler in PyCharm shows that it spends the time at <method 'connect' of '_socket.socket' objects>. However, a local cache is available so I am not sure why socket is needed. ([profiler result](https://ibb.co/0Btbbp8))
It only happens on WSL for me. It works for native Windows and my MacBook. (cache quickly recognized and loaded within a second).
### Steps to reproduce the bug
I am using Ubuntu 22.04.2 LTS (GNU/Linux 5.15.90.1-microsoft-standard-WSL2 x86_64)
Python 3.10.10 (main, Mar 21 2023, 18:45:11) [GCC 11.2.0] on linux
>>> import datasets
>>> datasets.load_dataset('ai2_arc', 'ARC-Challenge') # hangs for 5/10/15 minutes
### Expected behavior
cache quickly recognized and loaded within a second
### Environment info
Please let me know if I should provide more environment information. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6034/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6034/timeline | null | completed | null | null | false | [
"Even if a dataset is cached, we still make requests to check whether the cache is up-to-date. [This](https://huggingface.co/docs/datasets/v2.13.1/en/loading#offline) section in the docs explains how to avoid them and directly load the cached version.",
"Thanks - that works! However it doesn't resolve the origina... |
https://api.github.com/repos/huggingface/datasets/issues/86 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/86/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/86/comments | https://api.github.com/repos/huggingface/datasets/issues/86/events | https://github.com/huggingface/datasets/pull/86 | 617,260,972 | MDExOlB1bGxSZXF1ZXN0NDE3MjEwNzY2 | 86 | [Load => load_dataset] change naming | [] | closed | false | null | 0 | 2020-05-13T08:43:00Z | 2020-05-13T08:50:58Z | 2020-05-13T08:50:57Z | null | Rename leftovers @thomwolf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/86/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/86/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/86.diff",
"html_url": "https://github.com/huggingface/datasets/pull/86",
"merged_at": "2020-05-13T08:50:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/86.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/86"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/411/comments | https://api.github.com/repos/huggingface/datasets/issues/411/events | https://github.com/huggingface/datasets/pull/411 | 659,393,398 | MDExOlB1bGxSZXF1ZXN0NDUxMjQxOTQy | 411 | Sbf | [] | closed | false | null | 0 | 2020-07-17T16:19:45Z | 2020-07-21T09:13:46Z | 2020-07-21T09:13:45Z | null | This PR adds the Social Bias Frames Dataset (ACL 2020) .
dataset homepage: https://homes.cs.washington.edu/~msap/social-bias-frames/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/411/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/411/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/411.diff",
"html_url": "https://github.com/huggingface/datasets/pull/411",
"merged_at": "2020-07-21T09:13:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/411.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/411"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3524 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3524/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3524/comments | https://api.github.com/repos/huggingface/datasets/issues/3524/events | https://github.com/huggingface/datasets/pull/3524 | 1,093,826,723 | PR_kwDODunzps4wiK_v | 3,524 | Adding link to license. | [] | closed | false | null | 0 | 2022-01-04T23:11:48Z | 2022-01-05T12:31:38Z | 2022-01-05T12:31:37Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3524/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3524/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3524.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3524",
"merged_at": "2022-01-05T12:31:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3524.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3524"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/259/comments | https://api.github.com/repos/huggingface/datasets/issues/259/events | https://github.com/huggingface/datasets/issues/259 | 636,239,529 | MDU6SXNzdWU2MzYyMzk1Mjk= | 259 | documentation missing how to split a dataset | [] | closed | false | null | 7 | 2020-06-10T13:18:13Z | 2023-03-14T13:56:07Z | 2020-06-18T22:20:24Z | null | I am trying to understand how to split a dataset ( as arrow_dataset).
I know I can do something like this to access a split which is already in the original dataset:
`ds_test = nlp.load_dataset('imdb', split='test')`
But how can I split ds_test into a test and a validation set (without reading the data into memory and keeping the arrow_dataset as container)?
I guess it has something to do with the `split` module :-) but there is no real documentation in the code, only a reference to a longer description:
> See the [guide on splits](https://github.com/huggingface/nlp/tree/master/docs/splits.md) for more information.
But the guide seems to be missing.
To clarify: I know that this has been modelled after TensorFlow Datasets and that some of the documentation there can be used, [like this one](https://www.tensorflow.org/datasets/splits). But to come back to the example above: I cannot simply split the test set by doing this:
`ds_test = nlp.load_dataset('imdb', split='test[:5000]')`
`ds_val = nlp.load_dataset('imdb', split='test[5000:]')`
because the imdb test data is sorted by class (probably not a good idea anyway)
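For completeness, a sketch of two approaches (the first is confirmed in the comments below; the second assumes a version that already ships `Dataset.train_test_split`, which also lets you shuffle away the class-sorted order):
```python
import nlp

# Option 1: slice the split at load time (percentages work too)
ds_val = nlp.load_dataset("imdb", split="test[:20%]")
ds_test = nlp.load_dataset("imdb", split="test[20%:]")

# Option 2: split an already-loaded arrow dataset, with shuffling
ds = nlp.load_dataset("imdb", split="test")
splits = ds.train_test_split(test_size=0.2, shuffle=True, seed=42)
ds_test, ds_val = splits["train"], splits["test"]
```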
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/259/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/259/timeline | null | completed | null | null | false | [
"this seems to work for my specific problem:\r\n\r\n`self.train_ds, self.test_ds, self.val_ds = map(_prepare_ds, ('train', 'test[:25%]+test[50%:75%]', 'test[75%:]'))`",
"Currently you can indeed split a dataset using `ds_test = nlp.load_dataset('imdb, split='test[:5000]')` (works also with percentages).\r\n\r\nHo... |
https://api.github.com/repos/huggingface/datasets/issues/1507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1507/comments | https://api.github.com/repos/huggingface/datasets/issues/1507/events | https://github.com/huggingface/datasets/pull/1507 | 763,857,872 | MDExOlB1bGxSZXF1ZXN0NTM4MTgyMzE2 | 1,507 | Add SelQA Dataset | [] | closed | false | null | 3 | 2020-12-12T13:58:07Z | 2020-12-16T16:49:23Z | 2020-12-16T16:49:23Z | null | Add the SelQA Dataset, a new benchmark for selection-based question answering tasks
Repo: https://github.com/emorynlp/selqa/
Paper: https://arxiv.org/pdf/1606.08513.pdf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1507/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1507/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1507.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1507",
"merged_at": "2020-12-16T16:49:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1507.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1507"
} | true | [
"Hii please follow me",
"The CI error `FAILED tests/test_file_utils.py::TempSeedTest::test_tensorflow` is not related with this dataset and is fixed on master. You can ignore it",
"merging since the Ci is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/3086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3086/comments | https://api.github.com/repos/huggingface/datasets/issues/3086/events | https://github.com/huggingface/datasets/pull/3086 | 1,026,481,905 | PR_kwDODunzps4tNIvp | 3,086 | Remove _resampler from Audio fields | [] | closed | false | null | 0 | 2021-10-14T14:38:50Z | 2021-10-14T15:13:41Z | 2021-10-14T15:13:40Z | null | The `_resampler` Audio attribute was implemented to optimize audio resampling, but it should not be cached.
This PR removes `_resampler` from Audio fields, so that it is not returned by `fields()` or `asdict()`.
Fix #3083. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3086/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3086/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3086.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3086",
"merged_at": "2021-10-14T15:13:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3086.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3086"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1659/comments | https://api.github.com/repos/huggingface/datasets/issues/1659/events | https://github.com/huggingface/datasets/pull/1659 | 775,831,288 | MDExOlB1bGxSZXF1ZXN0NTQ2NDM1OTcy | 1,659 | update dataset info | [] | closed | false | null | 0 | 2020-12-29T10:58:01Z | 2020-12-30T16:55:07Z | 2020-12-30T16:55:07Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1659/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1659/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1659.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1659",
"merged_at": "2020-12-30T16:55:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1659.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1659"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/1292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1292/comments | https://api.github.com/repos/huggingface/datasets/issues/1292/events | https://github.com/huggingface/datasets/pull/1292 | 759,354,627 | MDExOlB1bGxSZXF1ZXN0NTM0Mzc0MzQ3 | 1,292 | arXiv dataset added | [] | closed | false | null | 0 | 2020-12-08T11:08:28Z | 2020-12-08T14:02:13Z | 2020-12-08T14:02:13Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1292/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1292.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1292",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1292.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1292"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/1395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1395/comments | https://api.github.com/repos/huggingface/datasets/issues/1395/events | https://github.com/huggingface/datasets/pull/1395 | 760,448,255 | MDExOlB1bGxSZXF1ZXN0NTM1Mjc4MTQ2 | 1,395 | Add WikiSource Dataset | [] | closed | false | null | 1 | 2020-12-09T15:52:06Z | 2020-12-14T10:24:14Z | 2020-12-14T10:24:13Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1395/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1395/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1395.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1395",
"merged_at": "2020-12-14T10:24:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1395.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1395"
} | true | [
"@lhoestq fixed :) "
] | |
https://api.github.com/repos/huggingface/datasets/issues/3260 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3260/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3260/comments | https://api.github.com/repos/huggingface/datasets/issues/3260/events | https://github.com/huggingface/datasets/pull/3260 | 1,052,247,373 | PR_kwDODunzps4ueCIU | 3,260 | Fix ConnectionError in Scielo dataset | [] | closed | false | null | 1 | 2021-11-12T18:02:37Z | 2021-11-16T18:18:17Z | 2021-11-16T17:55:22Z | null | This PR:
* allows 403 status code in HEAD requests to S3 buckets to fix the connection error in the Scielo dataset (instead of `url`, uses `response.url` to check the URL of the final endpoint)
* makes the Scielo dataset streamable
Fixes #3255. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3260/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3260/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3260.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3260",
"merged_at": "2021-11-16T17:55:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3260.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3260"
} | true | [
"The CI error is unrelated to the change."
] |
https://api.github.com/repos/huggingface/datasets/issues/5584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5584/comments | https://api.github.com/repos/huggingface/datasets/issues/5584/events | https://github.com/huggingface/datasets/issues/5584 | 1,601,821,808 | I_kwDODunzps5fedxw | 5,584 | Unable to load coyo700M dataset | [] | closed | false | null | 1 | 2023-02-27T19:35:03Z | 2023-02-28T07:27:59Z | 2023-02-28T07:27:58Z | null | ### Describe the bug
Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m:
```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.```
Full stack trace
```Downloading and preparing dataset parquet/kakaobrain--coyo-700m to /root/.cache/huggingface/datasets/kakaobrain___parquet/kakaobrain--coyo-700m-ae729692ae3e0073/0.0.0/2a3b91fbd88a2c90d1dbbb32b460cf621d31bd5b05b934492fdef7d8d6f236ec...
Downloading data files: 100%
1/1 [00:00<00:00, 63.35it/s]
Extracting data files: 100%
1/1 [00:00<00:00, 5.00it/s]
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
[/usr/local/lib/python3.8/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1859 _time = time.time()
-> 1860 for _, table in generator:
1861 if max_shard_size is not None and writer._num_bytes > max_shard_size:
9 frames
ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
[/usr/local/lib/python3.8/dist-packages/datasets/builder.py](https://localhost:8080/#) in _prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1890 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1891 e = e.__context__
-> 1892 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1893
1894 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset```
### Steps to reproduce the bug
```
from datasets import load_dataset
hf_dataset = load_dataset("kakaobrain/coyo-700m")
```
### Expected behavior
The above commands load the dataset successfully, or handle the exception and continue loading the remainder.
### Environment info
colab. any | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5584/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5584/timeline | null | completed | null | null | false | [
"Hi @manuaero \r\n\r\nThank you for your interest in the COYO dataset.\r\n\r\nOur dataset provides the img-url and alt-text in the form of a parquet, so to utilize the coyo dataset you will need to download it directly.\r\n\r\nWe provide a [guide](https://github.com/kakaobrain/coyo-dataset/blob/main/download/README... |
https://api.github.com/repos/huggingface/datasets/issues/4229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4229/comments | https://api.github.com/repos/huggingface/datasets/issues/4229/events | https://github.com/huggingface/datasets/pull/4229 | 1,216,638,968 | PR_kwDODunzps421mjM | 4,229 | new task tag | [] | closed | false | null | 0 | 2022-04-27T00:47:08Z | 2022-04-27T00:48:28Z | 2022-04-27T00:48:17Z | null | multi-input-text-classification tag for classification datasets that take more than one input | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4229/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4229/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4229.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4229",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4229.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4229"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4013/comments | https://api.github.com/repos/huggingface/datasets/issues/4013/events | https://github.com/huggingface/datasets/issues/4013 | 1,180,427,174 | I_kwDODunzps5GW-Om | 4,013 | Cannot preview "hazal/Turkish-Biomedical-corpus-trM" | [] | closed | false | null | 2 | 2022-03-25T07:12:02Z | 2022-04-04T08:05:01Z | 2022-03-25T14:16:11Z | null | ## Dataset viewer issue for '*hazal/Turkish-Biomedical-corpus-trM'
**Link:** *https://huggingface.co/datasets/hazal/Turkish-Biomedical-corpus-trM*
*I cannot see the dataset preview.*
```
Server Error
Status code: 400
Exception: HTTPError
Message: 403 Client Error: Forbidden for url: https://huggingface.co/api/datasets/hazal/Turkish-Biomedical-corpus-trM?full=true
```
Am I the one who added this dataset? Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4013/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4013/timeline | null | completed | null | null | false | [
"Hi @hazalturkmen, thanks for reporting.\r\n\r\nNote that your dataset repository does not contain any loading script; it only contains a data file named `tr_article_2`.\r\n\r\nWhen there is no loading script but only data files, the `datasets` library tries to infer how to load the data by looking at the data file... |
https://api.github.com/repos/huggingface/datasets/issues/5966 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5966/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5966/comments | https://api.github.com/repos/huggingface/datasets/issues/5966/events | https://github.com/huggingface/datasets/pull/5966 | 1,763,885,914 | PR_kwDODunzps5TXBLP | 5,966 | Fix JSON generation in benchmarks CI | [] | closed | false | null | 3 | 2023-06-19T16:56:06Z | 2023-06-19T17:29:11Z | 2023-06-19T17:22:10Z | null | Related to changes made in https://github.com/iterative/dvc/pull/9475 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5966/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5966/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5966.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5966",
"merged_at": "2023-06-19T17:22:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5966.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5966"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/1377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1377/comments | https://api.github.com/repos/huggingface/datasets/issues/1377/events | https://github.com/huggingface/datasets/pull/1377 | 760,309,435 | MDExOlB1bGxSZXF1ZXN0NTM1MTYyOTcz | 1,377 | adding marathi-wiki dataset | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 3 | 2020-12-09T13:01:20Z | 2022-10-03T09:39:09Z | 2022-10-03T09:39:09Z | null | Adding marathi-wiki-articles dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1377/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1377/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/1377.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1377",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1377.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1377"
} | true | [
"Can you make it a draft PR until you've added the dataset please ? @ekdnam ",
"Done",
"Thanks for your contribution, @ekdnam. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/dataset... |
https://api.github.com/repos/huggingface/datasets/issues/5165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5165/comments | https://api.github.com/repos/huggingface/datasets/issues/5165/events | https://github.com/huggingface/datasets/issues/5165 | 1,423,616,677 | I_kwDODunzps5U2qql | 5,165 | Memory explosion when trying to access 4d tensors in datasets cast to torch or np | [] | open | false | null | 0 | 2022-10-26T08:14:47Z | 2022-10-26T08:14:47Z | null | null | ### Describe the bug
When trying to access an item by index in a datasets.Dataset cast to torch/np using `set_format` or `with_format`, we get a memory explosion if the item contains 4d (or higher-dimensional) tensors.
### Steps to reproduce the bug
MWE:
```python
from datasets import load_dataset
import numpy as np
def create_4d_tensor(item):
i = item["num_nodes"]
item["x_big"] = np.random.rand(i, 2*i, int(i/2), 1) + 1 # we create a big 4d tensor
return item
if __name__ == "__main__":
dataset = load_dataset(path=f"graphs-datasets/PROTEINS")
# This works
print(dataset["train"].format)
print(dataset["train"][0].keys())
dataset = dataset.map(
create_4d_tensor,
batched=False,
writer_batch_size=100,
)
# This works
print(dataset["train"].format)
print(dataset["train"][0].keys())
dataset.set_format("torch")
print(dataset["train"].format)
# This gets killed :(
print(dataset["train"][0].keys())
```
The problem likely comes from `format_table` [here](https://cs.github.com/huggingface/datasets/blob/f09f781be3278156ce3aa6ec90c1926b1846a78f/src/datasets/arrow_dataset.py#L2328)
### Expected behavior
No memory explosion when trying to access dataset items after cast.
### Environment info
- `datasets` version: 2.3.2
- Platform: Linux-5.14.0-1054-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 8.0.0
- Pandas version: 1.4.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5165/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5165/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5834/comments | https://api.github.com/repos/huggingface/datasets/issues/5834/events | https://github.com/huggingface/datasets/issues/5834 | 1,702,448,892 | I_kwDODunzps5leU78 | 5,834 | Is uint8 supported? | [] | closed | false | null | 5 | 2023-05-09T17:31:13Z | 2023-05-13T05:04:21Z | 2023-05-13T05:04:21Z | null | ### Describe the bug
I expect the dataset to store the data in the `uint8` data type, but it's returning `int64` instead.
While I've found that `datasets` doesn't yet support float16 (https://github.com/huggingface/datasets/issues/4981), I'm wondering if this is the case for other data types as well.
Is there a way to store vector data as `uint8` and then upload it to the hub?
### Steps to reproduce the bug
```python
from datasets import Features, Dataset, Sequence, Value
import numpy as np
dataset = Dataset.from_dict(
{"vector": [np.array([0, 1, 2], dtype=np.uint8)]}, features=Features({"vector": Sequence(Value("uint8"))})
).with_format("numpy")
print(dataset[0]["vector"].dtype)
```
### Expected behavior
Expected: `uint8`
Actual: `int64`
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-12.1-x86_64-i386-64bit
- Python version: 3.8.12
- Huggingface_hub version: 0.12.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5834/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5834/timeline | null | completed | null | null | false | [
"Hi ! The numpy formatting detaults to int64 and float32 - but you can use uint8 using\r\n```python\r\nds = ds.with_format(\"numpy\", dtype=np.uint8)\r\n```",
"Related to https://github.com/huggingface/datasets/issues/5517.",
"Thank you!\r\nBy setting `ds.with_format(\"numpy\", dtype=np.uint8)`, the dataset ret... |
https://api.github.com/repos/huggingface/datasets/issues/1719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1719/comments | https://api.github.com/repos/huggingface/datasets/issues/1719/events | https://github.com/huggingface/datasets/pull/1719 | 783,557,542 | MDExOlB1bGxSZXF1ZXN0NTUyODk3MzY4 | 1,719 | Fix column list comparison in transmit format | [] | closed | false | null | 0 | 2021-01-11T17:23:56Z | 2021-01-11T18:45:03Z | 2021-01-11T18:45:02Z | null | As noticed in #1718 the cache might not reload the cache files when new columns were added.
This is because of an issue in `transmit_format` where the column list comparison fails because the order was not deterministic. This causes the `transmit_format` to apply an unnecessary `set_format` transform with shuffled column names.
I fixed that by sorting the columns for the comparison and added a test.
To properly test that I added a third column `col_3` to the dummy_dataset used for tests. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1719/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1719/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1719.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1719",
"merged_at": "2021-01-11T18:45:02Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1719.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1719"
} | true | [] |
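For issue 1719 above, a minimal illustration of the order-insensitive column comparison the fix describes; the variable names are illustrative and not the actual `transmit_format` code.

```python
previous_columns = ["col_1", "col_2", "col_3"]
current_columns = ["col_3", "col_1", "col_2"]

# Order-sensitive comparison: reports a spurious difference, which would
# trigger an unnecessary set_format transform
print(previous_columns != current_columns)                   # True

# Order-insensitive comparison, as described in the fix
print(sorted(previous_columns) != sorted(current_columns))   # False
```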
https://api.github.com/repos/huggingface/datasets/issues/2311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2311/comments | https://api.github.com/repos/huggingface/datasets/issues/2311/events | https://github.com/huggingface/datasets/pull/2311 | 875,262,208 | MDExOlB1bGxSZXF1ZXN0NjI5NjQwNTMx | 2,311 | Add SLR52, SLR53 and SLR54 to OpenSLR | [] | closed | false | null | 2 | 2021-05-04T09:08:03Z | 2021-05-07T09:50:55Z | 2021-05-07T09:50:55Z | null | Add large speech datasets for Sinhala, Bengali and Nepali. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2311/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2311/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2311.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2311",
"merged_at": "2021-05-07T09:50:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2311.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2311"
} | true | [
"Hi @lhoestq , I am not sure about the error message:\r\n```\r\n#!/bin/bash -eo pipefail\r\n./scripts/datasets_metadata_validator.py\r\nWARNING:root:❌ Failed to validate 'datasets/openslr/README.md':\r\n__init__() got an unexpected keyword argument 'SLR32'\r\nINFO:root:❌ Failed on 1 files.\r\n\r\nExited with code e... |
https://api.github.com/repos/huggingface/datasets/issues/985 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/985/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/985/comments | https://api.github.com/repos/huggingface/datasets/issues/985/events | https://github.com/huggingface/datasets/pull/985 | 755,020,564 | MDExOlB1bGxSZXF1ZXN0NTMwODEyNTM1 | 985 | Add GAP dataset | [] | closed | false | null | 3 | 2020-12-02T07:25:11Z | 2022-10-06T14:11:52Z | 2020-12-02T16:16:32Z | null | GAP dataset
Gender bias coreference resolution | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/985/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/985/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/985.diff",
"html_url": "https://github.com/huggingface/datasets/pull/985",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/985.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/985"
} | true | [
"This dataset already exists apparently, sorry :/ \r\nsee\r\nhttps://github.com/huggingface/datasets/blob/master/datasets/gap/gap.py\r\n\r\nFeel free to re-use the dataset card you did for `/datasets/gap`\r\n",
"oh heck, my bad 🤦♂️ sorry",
"I think you should also delete this branch."
] |
https://api.github.com/repos/huggingface/datasets/issues/3192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3192/comments | https://api.github.com/repos/huggingface/datasets/issues/3192/events | https://github.com/huggingface/datasets/issues/3192 | 1,041,308,086 | I_kwDODunzps4-ERm2 | 3,192 | Multiprocessing filter/map (tests) not working on Windows | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 0 | 2021-11-01T15:36:08Z | 2021-11-01T15:57:03Z | null | null | While running the tests, I found that the multiprocessing examples fail on Windows, or rather they do not complete: they cause a deadlock. I haven't dug deep into it, but they do not seem to work as-is. I currently have no time to test this in detail, but at least the tests seem not to run correctly (deadlocking).
## Steps to reproduce the bug
```shell
pytest tests/test_arrow_dataset.py -k "test_filter_multiprocessing"
pytest tests/test_arrow_dataset.py -k "test_map_multiprocessing"
```
## Expected results
The functionality to work on all platforms.
## Actual results
Deadlock.
## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2, also tested with 3.7.9
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3192/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3192/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1875/comments | https://api.github.com/repos/huggingface/datasets/issues/1875/events | https://github.com/huggingface/datasets/pull/1875 | 807,887,267 | MDExOlB1bGxSZXF1ZXN0NTczMDM2NzE0 | 1,875 | Adding sari metric | [] | closed | false | null | 0 | 2021-02-14T04:38:35Z | 2021-02-17T15:56:27Z | 2021-02-17T15:56:27Z | null | Adding SARI metric that is used in evaluation of text simplification. This is required as part of the GEM benchmark. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1875/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1875/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1875.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1875",
"merged_at": "2021-02-17T15:56:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1875.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1875"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3916/comments | https://api.github.com/repos/huggingface/datasets/issues/3916/events | https://github.com/huggingface/datasets/pull/3916 | 1,168,869,191 | PR_kwDODunzps40a-cR | 3,916 | Create README.md for GLUE | [] | closed | false | null | 1 | 2022-03-14T20:27:22Z | 2022-03-15T17:06:57Z | 2022-03-15T17:06:56Z | null | I still have a hesitation regarding the format of inputs -- whether it's a list or a list of lists? -- hopefully @lhoestq will be able to clarify.
Also tagging @yjernite for the Limitations section. Happy to hear your thoughts! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3916/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3916/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3916.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3916",
"merged_at": "2022-03-15T17:06:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3916.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3916"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3916). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/982/comments | https://api.github.com/repos/huggingface/datasets/issues/982/events | https://github.com/huggingface/datasets/pull/982 | 754,946,337 | MDExOlB1bGxSZXF1ZXN0NTMwNzUxMzYx | 982 | add prachathai67k take2 | [] | closed | false | null | 0 | 2020-12-02T05:12:01Z | 2020-12-02T10:18:11Z | 2020-12-02T10:18:11Z | null | I decided it will be faster to create a new pull request instead of fixing the rebase issues.
continuing from https://github.com/huggingface/datasets/pull/954
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/982/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/982/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/982.diff",
"html_url": "https://github.com/huggingface/datasets/pull/982",
"merged_at": "2020-12-02T10:18:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/982.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/982"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1662/comments | https://api.github.com/repos/huggingface/datasets/issues/1662/events | https://github.com/huggingface/datasets/issues/1662 | 775,890,154 | MDU6SXNzdWU3NzU4OTAxNTQ= | 1,662 | Arrow file is too large when saving vector data | [] | closed | false | null | 4 | 2020-12-29T13:23:12Z | 2021-01-21T14:12:39Z | 2021-01-21T14:12:39Z | null | I computed the sentence embedding of each sentence of bookcorpus data using bert base and saved them to disk. I used 20M sentences and the obtained arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the arrow file? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1662/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1662/timeline | null | completed | null | null | false | [
"Hi !\r\nThe arrow file size is due to the embeddings. Indeed if they're stored as float32 then the total size of the embeddings is\r\n\r\n20 000 000 vectors * 768 dimensions * 4 bytes per dimension ~= 60GB\r\n\r\nIf you want to reduce the size you can consider using quantization for example, or maybe using dimensi... |
https://api.github.com/repos/huggingface/datasets/issues/4051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4051/comments | https://api.github.com/repos/huggingface/datasets/issues/4051/events | https://github.com/huggingface/datasets/issues/4051 | 1,184,400,179 | I_kwDODunzps5GmIMz | 4,051 | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py | [] | closed | false | null | 5 | 2022-03-29T07:00:31Z | 2022-05-08T07:27:32Z | 2022-03-29T08:29:25Z | null | Hi, I meet a problem.
When I run the code:
`dataset = load_dataset('glue','sst2')`
The following error is raised:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.0.0/datasets/glue/glue.py
I don't know why; the URL opens fine when I view it in Google Chrome.
Thanks for your help! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4051/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4051/timeline | null | completed | null | null | false | [
"Hi @klyuhang9,\r\n\r\nI'm sorry but I can't reproduce your problem:\r\n```python\r\nIn [4]: ds = load_dataset(\"glue\", \"sst2\", download_mode=\"force_redownload\")\r\nDownloading builder script: 28.8kB [00:00, 9.15MB/s] ... |
https://api.github.com/repos/huggingface/datasets/issues/1930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1930/comments | https://api.github.com/repos/huggingface/datasets/issues/1930/events | https://github.com/huggingface/datasets/pull/1930 | 814,055,198 | MDExOlB1bGxSZXF1ZXN0NTc4MTAwNzI0 | 1,930 | updated the wino_bias dataset | [] | closed | false | null | 3 | 2021-02-23T03:07:40Z | 2021-04-07T15:24:56Z | 2021-04-07T15:24:56Z | null | Updated the wino_bias.py script.
- updated the data_url
- added different configurations for different data splits
- added the coreference_cluster to the data features | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1930/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1930/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1930.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1930",
"merged_at": "2021-04-07T15:24:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1930.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1930"
} | true | [
"Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\nThanks again for your help on this !",
"> Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\n> Thanks again for your help on this !\r\n\r\nHi @lhoestq Yes, I've updated the code. Now the configuration will... |
https://api.github.com/repos/huggingface/datasets/issues/3136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3136/comments | https://api.github.com/repos/huggingface/datasets/issues/3136/events | https://github.com/huggingface/datasets/pull/3136 | 1,033,360,396 | PR_kwDODunzps4tieFi | 3,136 | Fix script of Arabic Billion Words dataset to return all data | [] | closed | false | null | 0 | 2021-10-22T09:14:24Z | 2021-10-22T13:28:41Z | 2021-10-22T13:28:40Z | null | The script has a bug and only parses and generates a portion of the entire dataset.
This PR fixes the loading script so that it properly parses the entire dataset.
Current implementation generates the same number of examples as reported in the [original paper](https://arxiv.org/abs/1611.04033) for all configurations except for one:
- For "Youm7" we generate more examples (1172136) than the ones reported by the paper (1025027)
| | Number of examples | Number of examples according to the source |
|:---------------|-------------------:|-----:|
| Alittihad | 349342 |349342 |
| Almasryalyoum | 291723 |291723 |
| Almustaqbal | 446873 |446873 |
| Alqabas | 817274 |817274 |
| Echoroukonline | 139732 |139732 |
| Ryiadh | 858188 | 858188 |
| Sabanews | 92149 |92149 |
| SaudiYoum | 888068 |888068 |
| Techreen | 314597 |314597 |
| Youm7 | 1172136 |1025027 |
Fix #3126. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3136/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3136/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3136.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3136",
"merged_at": "2021-10-22T13:28:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3136.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3136"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2536/comments | https://api.github.com/repos/huggingface/datasets/issues/2536/events | https://github.com/huggingface/datasets/issues/2536 | 927,338,639 | MDU6SXNzdWU5MjczMzg2Mzk= | 2,536 | Use `Audio` features for `AutomaticSpeechRecognition` task template | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 2 | 2021-06-22T15:07:21Z | 2022-06-01T17:18:16Z | 2022-06-01T17:18:16Z | null | In #2533 we added a task template for speech recognition that relies on the file paths to the audio files. As pointed out by @SBrandeis this is brittle as it doesn't port easily across different OS'.
The solution is to use dedicated `Audio` features when casting the dataset. These features are not yet available in `datasets`, but should be included in the `AutomaticSpeechRecognition` template once they are. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2536/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2536/timeline | null | completed | null | null | false | [
"I'm just retaking and working on #2324. 😉 ",
"Resolved via https://github.com/huggingface/datasets/pull/4006."
] |
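For issue 2536 above, a minimal sketch of casting an audio column with the dedicated `Audio` feature that the resolving PR eventually made available; the file path, sampling rate, and column layout are placeholders.

```python
from datasets import Audio, Dataset

# Toy dataset with a path-valued audio column (paths are placeholders)
ds = Dataset.from_dict({"audio": ["path/to/clip_1.wav"], "text": ["hello"]})

# Cast the column to the Audio feature instead of keeping raw file paths
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```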
https://api.github.com/repos/huggingface/datasets/issues/704 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/704/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/704/comments | https://api.github.com/repos/huggingface/datasets/issues/704/events | https://github.com/huggingface/datasets/pull/704 | 713,572,556 | MDExOlB1bGxSZXF1ZXN0NDk2ODY2NTQ0 | 704 | Fix remote tests for new datasets | [] | closed | false | null | 0 | 2020-10-02T12:08:04Z | 2020-10-02T12:12:02Z | 2020-10-02T12:12:01Z | null | When adding a new dataset, the remote tests fail because they try to get the new dataset from the master branch (i.e., where the dataset doesn't exist yet)
To fix that I reverted to using the HF API, which fetches the available datasets on S3 (synced with the master branch). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/704/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/704/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/704.diff",
"html_url": "https://github.com/huggingface/datasets/pull/704",
"merged_at": "2020-10-02T12:12:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/704.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/704"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1738/comments | https://api.github.com/repos/huggingface/datasets/issues/1738/events | https://github.com/huggingface/datasets/pull/1738 | 786,068,440 | MDExOlB1bGxSZXF1ZXN0NTU0OTk2NDU4 | 1,738 | Conda support | [] | closed | false | null | 3 | 2021-01-14T15:11:25Z | 2021-01-15T10:08:20Z | 2021-01-15T10:08:19Z | null | Will push a new version on anaconda cloud every time a tag starting with `v` is pushed (like `v1.2.2`).
Will appear here: https://anaconda.org/huggingface/datasets
Depends on `conda-forge` for now, so the following is required for installation:
```
conda install -c huggingface -c conda-forge datasets
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 4,
"total_count": 4,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1738/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1738/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1738.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1738",
"merged_at": "2021-01-15T10:08:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1738.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1738"
} | true | [
"Nice thanks :) \r\nNote that in `datasets` the tags are simply the version without the `v`. For example `1.2.1`.",
"Do you push tags only for versions?",
"Yes I've always used tags only for versions"
] |
https://api.github.com/repos/huggingface/datasets/issues/5191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5191/comments | https://api.github.com/repos/huggingface/datasets/issues/5191/events | https://github.com/huggingface/datasets/pull/5191 | 1,433,191,658 | PR_kwDODunzps5CD0Qp | 5,191 | Make torch.Tensor and spacy models cacheable | [] | closed | false | null | 1 | 2022-11-02T13:56:18Z | 2022-11-02T17:20:48Z | 2022-11-02T17:18:42Z | null | Override `Pickler.save` to implement deterministic reduction (lazily registered; inspired by https://github.com/uqfoundation/dill/blob/master/dill/_dill.py#L343) functions for `torch.Tensor` and spaCy models.
Fix https://github.com/huggingface/datasets/issues/5170, fix https://github.com/huggingface/datasets/issues/3178
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5191/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5191/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5191.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5191",
"merged_at": "2022-11-02T17:18:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5191.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5191"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4640 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4640/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4640/comments | https://api.github.com/repos/huggingface/datasets/issues/4640/events | https://github.com/huggingface/datasets/pull/4640 | 1,295,495,699 | PR_kwDODunzps4660rI | 4,640 | Support all split in streaming mode | [] | open | false | null | 1 | 2022-07-06T08:56:38Z | 2022-07-06T15:19:55Z | null | null | Fix #4637. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4640/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4640/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/4640.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4640",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4640.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4640"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4640). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/3332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3332/comments | https://api.github.com/repos/huggingface/datasets/issues/3332/events | https://github.com/huggingface/datasets/pull/3332 | 1,065,345,853 | PR_kwDODunzps4vGBig | 3,332 | Fix error message and add extension fallback | [] | closed | false | null | 0 | 2021-11-28T14:25:29Z | 2021-11-29T13:34:15Z | 2021-11-29T13:34:14Z | null | Fix the error message raised if `infered_module_name` is `None` in `CommunityDatasetModuleFactoryWithoutScript.get_module` and make `infer_module_for_data_files` more robust.
In the linked issue, `infer_module_for_data_files` returns `None` because `json` is the second most common extension due to the suffix ordering. Now, we go from the most common to the least common extension and try to map it or return `None`.
Fix #3331 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3332/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3332/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3332.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3332",
"merged_at": "2021-11-29T13:34:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3332.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3332"
} | true | [] |
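For issue 3332 above, a small sketch of the most-common-to-least-common extension fallback the PR describes; the mapping and function below are illustrative only, not the library's actual implementation.

```python
from collections import Counter

EXTENSION_TO_MODULE = {"csv": "csv", "json": "json", "parquet": "parquet", "txt": "text"}

def infer_module(data_files):
    extensions = [name.rsplit(".", 1)[-1].lower() for name in data_files if "." in name]
    # Walk from the most common to the least common extension and map the
    # first one we recognize; otherwise return None
    for extension, _count in Counter(extensions).most_common():
        if extension in EXTENSION_TO_MODULE:
            return EXTENSION_TO_MODULE[extension]
    return None

print(infer_module(["a.foo", "b.foo", "c.json"]))  # "json": falls back past the unknown "foo"
```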
https://api.github.com/repos/huggingface/datasets/issues/5439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5439/comments | https://api.github.com/repos/huggingface/datasets/issues/5439/events | https://github.com/huggingface/datasets/issues/5439 | 1,537,973,564 | I_kwDODunzps5bq508 | 5,439 | [dataset request] Add Common Voice 12.0 | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 2 | 2023-01-18T13:07:05Z | 2023-07-21T14:26:10Z | 2023-07-21T14:26:09Z | null | ### Feature request
Please add the Common Voice 12.0 (`common_voice_12_0`) datasets. Apart from English, a significant amount of audio data has been added for the other, smaller languages as well.
### Motivation
The dataset link:
https://commonvoice.mozilla.org/en/datasets
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5439/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5439/timeline | null | completed | null | null | false | [
"@polinaeterna any tentative date on when the Common Voice 12.0 dataset will be added ?",
"This dataset is now hosted on the Hub here: https://huggingface.co/datasets/mozilla-foundation/common_voice_12_0"
] |
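For issue 5439 above, a minimal sketch of loading the dataset from the Hub path given in the closing comment; the "en" config name and the need to authenticate are assumptions carried over from earlier Common Voice releases.

```python
from datasets import load_dataset

cv_12 = load_dataset(
    "mozilla-foundation/common_voice_12_0",  # Hub path from the comment above
    "en",                                    # assumed language config
    split="train",
    use_auth_token=True,                     # assumed to be gated like earlier releases
)
```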
https://api.github.com/repos/huggingface/datasets/issues/3131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3131/comments | https://api.github.com/repos/huggingface/datasets/issues/3131/events | https://github.com/huggingface/datasets/issues/3131 | 1,032,309,865 | I_kwDODunzps49h8xp | 3,131 | Add ADE20k | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | closed | false | null | 1 | 2021-10-21T10:13:09Z | 2023-01-27T14:40:20Z | 2023-01-27T14:40:20Z | null | ## Adding a Dataset
- **Name:** ADE20k (officially the MIT Scene Parsing Benchmark; it is a subset of ADE20k, but many authors still refer to it as ADE20k)
- **Description:** A semantic segmentation dataset, consisting of 150 classes.
- **Paper:** http://people.csail.mit.edu/bzhou/publication/scene-parse-camera-ready.pdf
- **Data:** http://sceneparsing.csail.mit.edu/
- **Motivation:** I am currently adding Transformer-based semantic segmentation models that achieve SOTA on this dataset. It would be great to directly access this dataset using HuggingFace Datasets, in order to make example scripts in HuggingFace Transformers.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3131/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3131/timeline | null | completed | null | null | false | [
"I think we can close this issue since PR [#3607](https://github.com/huggingface/datasets/pull/3607) solves this."
] |
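For issue 3131 above, a minimal usage sketch; `scene_parse_150` is the dataset script used for this benchmark in issue 6022 elsewhere in this dump, and nothing else about its schema is assumed.

```python
from datasets import load_dataset

ds = load_dataset("scene_parse_150")
print(ds)
```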
https://api.github.com/repos/huggingface/datasets/issues/3981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3981/comments | https://api.github.com/repos/huggingface/datasets/issues/3981/events | https://github.com/huggingface/datasets/pull/3981 | 1,175,423,517 | PR_kwDODunzps40vfra | 3,981 | Add TER metric card | [] | closed | false | null | 1 | 2022-03-21T13:54:36Z | 2022-03-29T13:57:11Z | 2022-03-29T13:51:40Z | null | Add TER metric card
This card is still missing content for the following sections:
- **Limitations & Biases**
- **Values from Papers**
If anyone has any ideas for either of the above, feel free to either add them or point me to them and I'll add them! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3981/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3981/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3981.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3981",
"merged_at": "2022-03-29T13:51:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3981.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3981"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/6071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6071/comments | https://api.github.com/repos/huggingface/datasets/issues/6071/events | https://github.com/huggingface/datasets/issues/6071 | 1,821,990,749 | I_kwDODunzps5smV9d | 6,071 | storage_options provided to load_dataset not fully piping through since datasets 2.14.0 | [] | closed | false | null | 2 | 2023-07-26T09:37:20Z | 2023-07-27T12:42:58Z | 2023-07-27T12:42:58Z | null | ### Describe the bug
Since the latest release of `datasets` (`2.14.0`), custom filesystem `storage_options` passed to `load_dataset()` do not seem to propagate through all the way - leading to problems if loading data files that need those options to be set.
I think this is because of the new `_prepare_path_and_storage_options()` (https://github.com/huggingface/datasets/pull/6028), which returns the right `storage_options` to use given a path and a `DownloadConfig` - but which might not be taking into account the extra `storage_options` explicitly provided e.g. through `load_dataset()`
### Steps to reproduce the bug
```python
import fsspec
import pandas as pd
import datasets
# Generate mock parquet file
data_files = "demo.parquet"
pd.DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}).to_parquet(data_files)
_storage_options = {"x": 1, "y": 2}
fs = fsspec.filesystem("file", **_storage_options)
dataset = datasets.load_dataset(
"parquet",
data_files=data_files,
storage_options=fs.storage_options
)
```
Looking at the `storage_options` resolved here:
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L331
they end up being `{}`, instead of propagating through the `storage_options` that were provided to `load_dataset` (`fs.storage_options`). As these then get used for the filesystem operation a few lines below
https://github.com/huggingface/datasets/blob/b0177910b32712f28d147879395e511207e39958/src/datasets/data_files.py#L339
the call will fail if the user-provided `storage_options` were needed.
---
A temporary workaround that seemed to work locally to bypass the problem was to bundle a duplicate of the `storage_options` into the `download_config`, so that they make their way all the way to `_prepare_path_and_storage_options()` and get extracted correctly:
```python
dataset = datasets.load_dataset(
"parquet",
data_files=data_files,
storage_options=fs.storage_options,
download_config=datasets.DownloadConfig(storage_options={fs.protocol: fs.storage_options}),
)
```
### Expected behavior
`storage_options` provided to `load_dataset` take effect in all backend filesystem operations.
### Environment info
datasets==2.14.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6071/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6071/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting, I opened a PR to fix this\r\n\r\nWhat filesystem are you using ?",
"Hi @lhoestq ! Thank you so much 🙌 \r\n\r\nIt's a bit of a custom setup, but in practice I am using a [pyarrow.fs.S3FileSystem](https://arrow.apache.org/docs/python/generated/pyarrow.fs.S3FileSystem.html) (wrapped in a... |
https://api.github.com/repos/huggingface/datasets/issues/6022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6022/comments | https://api.github.com/repos/huggingface/datasets/issues/6022/events | https://github.com/huggingface/datasets/issues/6022 | 1,800,092,589 | I_kwDODunzps5rSzut | 6,022 | Batch map raises TypeError: '>=' not supported between instances of 'NoneType' and 'int' | [] | closed | false | null | 1 | 2023-07-12T03:20:17Z | 2023-07-12T16:18:06Z | 2023-07-12T16:18:05Z | null | ### Describe the bug
When mapping some datasets with `batched=True`, datasets may raise an exception:
```python
Traceback (most recent call last):
File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1328, in _write_generator_to_queue
for i, result in enumerate(func(**kwargs)):
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 3483, in _map_single
writer.write_batch(batch)
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_writer.py", line 549, in write_batch
array = cast_array_to_feature(col_values, col_type) if col_type is not None else col_values
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 1831, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 1831, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/table.py", line 2063, in cast_array_to_feature
return feature.cast_storage(array)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/features/features.py", line 1098, in cast_storage
if min_max["max"] >= self.num_classes:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/codingl2k1/Work/datasets/t1.py", line 33, in <module>
ds = ds.map(transforms, num_proc=14, batched=True, batch_size=5)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/dataset_dict.py", line 850, in map
{
File "/Users/codingl2k1/Work/datasets/src/datasets/dataset_dict.py", line 851, in <dictcomp>
k: dataset.map(
^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 577, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 542, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/src/datasets/arrow_dataset.py", line 3179, in map
for rank, done, content in iflatmap_unordered(
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1368, in iflatmap_unordered
[async_result.get(timeout=0.05) for async_result in async_results]
File "/Users/codingl2k1/Work/datasets/src/datasets/utils/py_utils.py", line 1368, in <listcomp>
[async_result.get(timeout=0.05) for async_result in async_results]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 774, in get
raise self._value
TypeError: '>=' not supported between instances of 'NoneType' and 'int'
```
### Steps to reproduce the bug
1. Checkout the latest main of datasets.
2. Run the code:
```python
from datasets import load_dataset
def transforms(examples):
# examples["pixel_values"] = [image.convert("RGB").resize((100, 100)) for image in examples["image"]]
return examples
ds = load_dataset("scene_parse_150")
ds = ds.map(transforms, num_proc=14, batched=True, batch_size=5)
print(ds)
```
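The traceback points at the range check in `ClassLabel.cast_storage`, which compares `min_max["max"]` against `num_classes`. A minimal sketch of what appears to go wrong, assuming a small batch whose label column contains only missing values (the guard shown here is an illustration, not the actual patch):
```python
import pyarrow as pa
import pyarrow.compute as pc

num_classes = 150  # assumed value, for illustration only
chunk = pa.array([None, None, None], type=pa.int64())  # a batch with only null labels
min_max = pc.min_max(chunk).as_py()  # {'min': None, 'max': None} when every value is null
# comparing None >= int raises the TypeError seen above; a None guard avoids it
if min_max["max"] is not None and min_max["max"] >= num_classes:
    raise ValueError("label id out of range")
```
With `batch_size=5` and `num_proc=14`, some workers presumably receive batches whose class-label column is entirely null, which is what triggers the comparison against `None`.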
### Expected behavior
map without exception.
### Environment info
Datasets: https://github.com/huggingface/datasets/commit/b8067c0262073891180869f700ebef5ac3dc5cce
Python: 3.11.4
System: Macos | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6022/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6022/timeline | null | completed | null | null | false | [
"Thanks for reporting! I've opened a PR with a fix."
] |
https://api.github.com/repos/huggingface/datasets/issues/1505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1505/comments | https://api.github.com/repos/huggingface/datasets/issues/1505/events | https://github.com/huggingface/datasets/pull/1505 | 763,750,773 | MDExOlB1bGxSZXF1ZXN0NTM4MTEyMTk5 | 1,505 | add ilist dataset | [] | closed | false | null | 0 | 2020-12-12T12:44:12Z | 2020-12-17T15:43:07Z | 2020-12-17T15:43:07Z | null | This PR will add Indo-Aryan Language Identification Shared Task Dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1505/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1505/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1505.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1505",
"merged_at": "2020-12-17T15:43:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1505.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1505"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3713/comments | https://api.github.com/repos/huggingface/datasets/issues/3713/events | https://github.com/huggingface/datasets/pull/3713 | 1,135,692,572 | PR_kwDODunzps4yso6D | 3,713 | Rm sphinx doc | [] | closed | false | null | 2 | 2022-02-13T11:26:31Z | 2022-02-17T10:18:46Z | 2022-02-17T10:12:09Z | null | Checklist
- [x] Update circle ci yaml
- [x] Delete sphinx static & python files in docs dir
- [x] Update readme in docs dir
- [ ] Update docs config in setup.py | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3713/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3713/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3713.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3713",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3713.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3713"
} | true | [
"Thanks for pushing this :)\r\nOne minor comment regarding the PR itself - I noticed that some changes are coming from the upstream master, this might be due to a rebase. Would be nice if this PR doesn't include them for readabily, feel free to open a new one if necessary",
"Closing in favour https://github.com/h... |
https://api.github.com/repos/huggingface/datasets/issues/1631 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1631/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1631/comments | https://api.github.com/repos/huggingface/datasets/issues/1631/events | https://github.com/huggingface/datasets/pull/1631 | 774,349,222 | MDExOlB1bGxSZXF1ZXN0NTQ1Mjc5MTE2 | 1,631 | Update README.md | [] | closed | false | null | 0 | 2020-12-24T11:45:52Z | 2020-12-28T17:35:41Z | 2020-12-28T17:16:04Z | null | I made small change for citation | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1631/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1631/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1631.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1631",
"merged_at": "2020-12-28T17:16:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1631.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1631"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/508/comments | https://api.github.com/repos/huggingface/datasets/issues/508/events | https://github.com/huggingface/datasets/issues/508 | 679,705,734 | MDU6SXNzdWU2Nzk3MDU3MzQ= | 508 | TypeError: Receiver() takes no arguments | [] | closed | false | null | 5 | 2020-08-16T07:18:16Z | 2020-09-01T14:53:33Z | 2020-09-01T14:49:03Z | null | I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
This fails in the apache beam runner.
```
Traceback (most recent call last):
File "D:/ML/wikiembedding/gpt2_sv.py", line 36, in <module>
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=my_cache_dir, beam_runner='DirectRunner')
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\load.py", line 548, in load_dataset
builder_instance.download_and_prepare(
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 462, in download_and_prepare
self._download_and_prepare(
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\nlp\builder.py", line 969, in _download_and_prepare
pipeline_results = pipeline.run()
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\pipeline.py", line 534, in run
return self.runner.run_pipeline(self, self._options)
....
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 218, in process_encoded
self.output(decoded_value)
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\apache_beam\runners\worker\operations.py", line 332, in output
cython.cast(Receiver, self.receivers[output_index]).receive(windowed_value)
File "C:\Users\seto\AppData\Local\Programs\Python\Python38\lib\site-packages\Cython\Shadow.py", line 167, in cast
return type(*args)
TypeError: Receiver() takes no arguments
```
This is run on a Windows 10 machine with python 3.8. I get the same error loading the swedish wikipedia dump. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/508/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/508/timeline | null | completed | null | null | false | [
"Which version of Apache Beam do you have (can you copy your full environment info here)?",
"apache-beam==2.23.0\r\nnlp==0.4.0\r\n\r\nFor me this was resolved by running the same python script on Linux (or really WSL). ",
"Do you manage to run a dummy beam pipeline with python on windows ? \r\nYou can test a du... |
https://api.github.com/repos/huggingface/datasets/issues/2056 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2056/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2056/comments | https://api.github.com/repos/huggingface/datasets/issues/2056/events | https://github.com/huggingface/datasets/issues/2056 | 831,718,397 | MDU6SXNzdWU4MzE3MTgzOTc= | 2,056 | issue with opus100/en-fr dataset | [] | closed | false | null | 3 | 2021-03-15T11:32:42Z | 2021-03-16T15:49:00Z | 2021-03-16T15:48:59Z | null | Hi
I am running run_mlm.py code of huggingface repo with opus100/fr-en pair, I am getting this error, note that this error occurs for only this pairs and not the other pairs. Any idea why this is occurring? and how I can solve this?
Thanks a lot @lhoestq for your help in advance.
`
thread '<unnamed>' panicked at 'index out of bounds: the len is 617 but the index is 617', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
63%|██████████████████████████████████████████████████████████▊ | 626/1000 [00:27<00:16, 22.69ba/s]
Traceback (most recent call last):
File "run_mlm.py", line 550, in <module>
main()
File "run_mlm.py", line 412, in main
in zip(data_args.dataset_name, data_args.dataset_config_name)]
File "run_mlm.py", line 411, in <listcomp>
logger) for dataset_name, dataset_config_name\
File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 96, in get_tokenized_dataset
load_from_cache_file=not data_args.overwrite_cache,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in map
for k, dataset in self.items()
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in <dictcomp>
for k, dataset in self.items()
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1309, in map
update_data=update_data,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 204, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1574, in _map_single
batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1490, in apply_function_on_filtered_inputs
function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 89, in tokenize_function
return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2347, in __call__
**kwargs,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2532, in batch_encode_plus
**kwargs,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 384, in _batch_encode_plus
is_pretokenized=is_split_into_words,
pyo3_runtime.PanicException: index out of bounds: the len is 617 but the index is 617
` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2056/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2056/timeline | null | completed | null | null | false | [
"@lhoestq I also deleted the cache and redownload the file and still the same issue, I appreciate any help on this. thanks ",
"Here please find the minimal code to reproduce the issue @lhoestq note this only happens with MT5TokenizerFast\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers impor... |
https://api.github.com/repos/huggingface/datasets/issues/1047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1047/comments | https://api.github.com/repos/huggingface/datasets/issues/1047/events | https://github.com/huggingface/datasets/pull/1047 | 756,127,490 | MDExOlB1bGxSZXF1ZXN0NTMxNzIyMjk4 | 1,047 | Add KorNLU | [] | closed | false | null | 5 | 2020-12-03T11:50:54Z | 2020-12-03T17:17:07Z | 2020-12-03T17:16:09Z | null | Added Korean NLU datasets. The link to the dataset can be found [here](https://github.com/kakaobrain/KorNLUDatasets) and the paper can be found [here](https://arxiv.org/abs/2004.03289)
**Note**: The MNLI tsv file is broken, so this code currently excludes the file. Please suggest an alternative, if any, @lhoestq
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1047/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1047/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1047.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1047",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1047.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1047"
} | true | [
"the CI error about `social_bias_frames` is fixed on master so it's fine",
"created new [PR](https://github.com/huggingface/datasets/pull/1062)",
"looks like this PR includes many changes to other files that the ones related to KorNLU\r\nCould you create another branch and another PR please ?",
"Wow crazy tim... |
https://api.github.com/repos/huggingface/datasets/issues/2140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2140/comments | https://api.github.com/repos/huggingface/datasets/issues/2140/events | https://github.com/huggingface/datasets/pull/2140 | 843,830,451 | MDExOlB1bGxSZXF1ZXN0NjAzMTYxMjYx | 2,140 | add banking77 dataset | [] | closed | false | null | 1 | 2021-03-29T21:32:23Z | 2021-04-09T09:32:18Z | 2021-04-09T09:32:18Z | null | Intent classification/detection dataset from banking category with 77 unique intents. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2140/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2140/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2140.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2140",
"merged_at": "2021-04-09T09:32:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2140.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2140"
} | true | [
"@lhoestq I updated files"
] |
https://api.github.com/repos/huggingface/datasets/issues/1651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1651/comments | https://api.github.com/repos/huggingface/datasets/issues/1651/events | https://github.com/huggingface/datasets/pull/1651 | 775,554,319 | MDExOlB1bGxSZXF1ZXN0NTQ2MjExMjQw | 1,651 | Add twi wordsim353 | [] | closed | false | null | 3 | 2020-12-28T19:31:55Z | 2021-01-04T09:39:39Z | 2021-01-04T09:39:38Z | null | Added the citation information to the README file | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1651/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1651/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1651.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1651",
"merged_at": "2021-01-04T09:39:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1651.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1651"
} | true | [
"Well actually it looks like it was already added in #1428 \r\n\r\nMaybe we can close this one ? Or you wanted to make changes to this dataset ?",
"Thank you, it's just a modification of Readme. I added the missing citation.",
"Indeed thanks"
] |
https://api.github.com/repos/huggingface/datasets/issues/5697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5697/comments | https://api.github.com/repos/huggingface/datasets/issues/5697/events | https://github.com/huggingface/datasets/pull/5697 | 1,651,812,614 | PR_kwDODunzps5NefxZ | 5,697 | Raise an error on missing distributed seed | [] | closed | false | null | 4 | 2023-04-03T10:44:58Z | 2023-04-04T15:05:24Z | 2023-04-04T14:58:16Z | null | close https://github.com/huggingface/datasets/issues/5696 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5697/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5697/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5697.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5697",
"merged_at": "2023-04-04T14:58:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5697.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5697"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/3428 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3428/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3428/comments | https://api.github.com/repos/huggingface/datasets/issues/3428/events | https://github.com/huggingface/datasets/pull/3428 | 1,078,863,468 | PR_kwDODunzps4vxtNT | 3,428 | Clean squad dummy data | [] | closed | false | null | 0 | 2021-12-13T18:46:29Z | 2021-12-13T18:57:50Z | 2021-12-13T18:57:50Z | null | Some unused files were remaining, this PR removes them. We just need to keep the dummy_data.zip file | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3428/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3428/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3428.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3428",
"merged_at": "2021-12-13T18:57:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3428.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3428"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1245/comments | https://api.github.com/repos/huggingface/datasets/issues/1245/events | https://github.com/huggingface/datasets/pull/1245 | 758,411,233 | MDExOlB1bGxSZXF1ZXN0NTMzNTg4NDUw | 1,245 | Add Google Turkish Treebank Dataset | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 1 | 2020-12-07T11:09:17Z | 2022-10-03T09:39:32Z | 2022-10-03T09:39:32Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1245/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1245/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/1245.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1245",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1245.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1245"
} | true | [
"Thanks for your contribution, @abhishekkrthakur. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tel... |
https://api.github.com/repos/huggingface/datasets/issues/1763 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1763/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1763/comments | https://api.github.com/repos/huggingface/datasets/issues/1763/events | https://github.com/huggingface/datasets/pull/1763 | 791,389,763 | MDExOlB1bGxSZXF1ZXN0NTU5NDU3MTY1 | 1,763 | PAWS-X: Fix csv Dictreader splitting data on quotes | [] | closed | false | null | 0 | 2021-01-21T18:21:01Z | 2021-01-22T10:14:33Z | 2021-01-22T10:13:45Z | null |
```python
from datasets import load_dataset
# load english paws-x dataset
datasets = load_dataset('paws-x', 'en')
print(len(datasets['train'])) # outputs 49202 but official dataset has 49401 pairs
print(datasets['train'].unique('label')) # outputs [1, 0, -1] but labels are binary [0,1]
```
Changed `data = csv.DictReader(f, delimiter="\t")` to `data = csv.DictReader(f, delimiter="\t", quoting=csv.QUOTE_NONE)` in the dataloader so that the csv module does not split fields on quote characters.
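A small, self-contained illustration of the difference (the sample line below is made up, not taken from PAWS-X):
```python
import csv
import io

# a field that *starts* with a quote character trips the default reader
tsv = 'id\tsentence1\tsentence2\tlabel\n'
tsv += '1\t"Quoted start of sentence one\tsentence two\t0\n'

print(next(csv.DictReader(io.StringIO(tsv), delimiter="\t")))
# default quoting treats the field as quoted, swallowing the tab and the following columns

print(next(csv.DictReader(io.StringIO(tsv), delimiter="\t", quoting=csv.QUOTE_NONE)))
# QUOTE_NONE splits on tabs only, keeping the quote as a literal character
```
This behavior is consistent with the row-count mismatch and the spurious `-1` labels reported above.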
The results are as expected for all languages after the change. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1763/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1763/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1763.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1763",
"merged_at": "2021-01-22T10:13:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1763.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1763"
} | true | [] |