| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | labels | state | locked | milestone | comments | created_at | updated_at | closed_at | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request | comments_text |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2564 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2564/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2564/comments | https://api.github.com/repos/huggingface/datasets/issues/2564/events | https://github.com/huggingface/datasets/issues/2564 | 932,389,639 | MDU6SXNzdWU5MzIzODk2Mzk= | 2,564 | concatenate_datasets for iterable datasets | [] | closed | false | null | 2 | 2021-06-29T08:59:41Z | 2022-06-28T21:15:04Z | 2022-06-28T21:15:04Z | null | Currently `concatenate_datasets` only works for map-style `Dataset`.
It would be nice to have it work for `IterableDataset` objects as well.
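For illustration, the chaining could look roughly like this (a sketch only, not the library's implementation):
```python
from itertools import chain

def concatenate_iterable_datasets(*iterable_datasets):
    # Yield examples from each iterable dataset in turn (illustrative only)
    yield from chain(*iterable_datasets)
```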
It would simply chain the iterables of the iterable datasets. | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2564/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2564/timeline | null | completed | null | null | false | [
"It is probably worth noting here that the [documentation](https://huggingface.co/docs/datasets/process#concatenate) is misleading (indicating that it does work for IterableDatasets):\r\n\r\n> You can also mix several datasets together by taking alternating examples from each one to create a new dataset. This is kn... |
https://api.github.com/repos/huggingface/datasets/issues/1278 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1278/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1278/comments | https://api.github.com/repos/huggingface/datasets/issues/1278/events | https://github.com/huggingface/datasets/pull/1278 | 758,988,465 | MDExOlB1bGxSZXF1ZXN0NTM0MDYwNDY5 | 1,278 | Craigslist bargains | [] | closed | false | null | 2 | 2020-12-08T01:45:55Z | 2020-12-09T00:46:15Z | 2020-12-09T00:46:15Z | null | `craigslist_bargains` dataset from [here](https://worksheets.codalab.org/worksheets/0x453913e76b65495d8b9730d41c7e0a0c/) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1278/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1278/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1278.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1278",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1278.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1278"
} | true | [
"Seeing this in the CircleCI builds, this is what I was originally getting before I started messing around with the download URLS to try to fix this:\r\n\r\n`FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpwvji917g/extracted/d6185140afb24ad8fee67392100a478269cba286b0d88915a137fdf88872de14/dummy_dat... |
https://api.github.com/repos/huggingface/datasets/issues/3442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3442/comments | https://api.github.com/repos/huggingface/datasets/issues/3442/events | https://github.com/huggingface/datasets/pull/3442 | 1,081,862,747 | PR_kwDODunzps4v7oBZ | 3,442 | Extend text to support yielding lines, paragraphs or documents | [] | closed | false | null | 5 | 2021-12-16T07:33:17Z | 2021-12-20T16:59:10Z | 2021-12-20T16:39:18Z | null | Add `config.row` option to `text` module to allow yielding lines (default, current case), paragraphs or documents.
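For illustration, the three modes could be generated along these lines (a rough sketch, not the PR's actual code):
```python
def iter_rows(path, row="line"):
    # Yield one example per line, paragraph, or whole document (sketch only)
    with open(path, encoding="utf-8") as f:
        if row == "document":
            yield f.read()
        elif row == "paragraph":
            for paragraph in f.read().split("\n\n"):
                if paragraph.strip():
                    yield paragraph
        else:  # "line" (the default, current behavior)
            for line in f:
                yield line.rstrip("\n")
```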
Feel free to comment on the name of the config parameter `row`:
- Currently, the docs state datasets are made of rows and columns
- Other names I considered: `example`, `item` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3442/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3442/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3442.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3442",
"merged_at": "2021-12-20T16:39:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3442.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3442"
} | true | [
"The parameter can also be named `split_by` with values \"line\", \"paragraph\" or \"document\" (no 's' at the end)",
"> The parameter can also be named `split_by` with values \"line\", \"paragraph\" or \"document\" (no 's' at the end)\r\n\r\n@lhoestq @mariosasko I would avoid the term `split` in this context and... |
https://api.github.com/repos/huggingface/datasets/issues/1401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1401/comments | https://api.github.com/repos/huggingface/datasets/issues/1401/events | https://github.com/huggingface/datasets/pull/1401 | 760,525,949 | MDExOlB1bGxSZXF1ZXN0NTM1MzQyOTY2 | 1,401 | Add reasoning_bg | [] | closed | false | null | 4 | 2020-12-09T17:30:49Z | 2020-12-17T16:50:43Z | 2020-12-17T16:50:42Z | null | Adding reading comprehension dataset for Bulgarian language | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1401/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1401/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1401.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1401",
"merged_at": "2020-12-17T16:50:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1401.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1401"
} | true | [
"Hi @saradhix have you had the chance to reduce the size of the dummy data ?\r\n\r\nFeel free to ping me when it's done so we can merge :) ",
"@lhoestq I have reduced the size of the dummy data manually and pushed the changes.",
"The CI errors are not related to your dataset.\r\nThey're fixed on master, you can... |
https://api.github.com/repos/huggingface/datasets/issues/3089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3089/comments | https://api.github.com/repos/huggingface/datasets/issues/3089/events | https://github.com/huggingface/datasets/issues/3089 | 1,026,973,360 | I_kwDODunzps49Nl6w | 3,089 | JNLPBA Dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-10-15T01:16:02Z | 2021-10-22T08:23:57Z | 2021-10-22T08:23:57Z | null | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
The dataset loading script for this dataset is incorrect. This is a biomedical dataset used for named entity recognition. The entities in the [script](https://github.com/huggingface/datasets/blob/master/datasets/jnlpba/jnlpba.py#L81-L83) are: O, B, and I. The correct entities from the original data file are:
['O',
'B-DNA',
'I-DNA',
'B-RNA',
'I-RNA',
'B-cell_line',
'I-cell_line',
'B-cell_type',
'I-cell_type',
'B-protein',
'I-protein']
## Actual results
The dataset loader script needs to include the following NER names:
['O',
'B-DNA',
'I-DNA',
'B-RNA',
'I-RNA',
'B-cell_line',
'I-cell_line',
'B-cell_type',
'I-cell_type',
'B-protein',
'I-protein']
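In `datasets` terms, that tag list corresponds to a feature definition along these lines (a sketch, not the script's actual code):
```python
import datasets

ner_tags = datasets.Sequence(
    datasets.ClassLabel(
        names=[
            "O",
            "B-DNA", "I-DNA",
            "B-RNA", "I-RNA",
            "B-cell_line", "I-cell_line",
            "B-cell_type", "I-cell_type",
            "B-protein", "I-protein",
        ]
    )
)
```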
And the [data](https://github.com/huggingface/datasets/blob/master/datasets/jnlpba/jnlpba.py#L46) that is being pulled has been modified from the original dataset and does not include the original NER tags.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform:
- Python version:
- PyArrow version:
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3089/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3089/timeline | null | completed | null | null | false | [
"# Steps to reproduce\r\n\r\nTo reproduce:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('jnlpba')\r\n\r\ndataset['train'].features['ner_tags']\r\n```\r\nOutput:\r\n```python\r\nSequence(feature=ClassLabel(num_classes=3, names=['O', 'B', 'I'], names_file=None, id=None), length=-1, ... |
https://api.github.com/repos/huggingface/datasets/issues/4111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4111/comments | https://api.github.com/repos/huggingface/datasets/issues/4111/events | https://github.com/huggingface/datasets/pull/4111 | 1,194,660,699 | PR_kwDODunzps41vJCt | 4,111 | Update security policy | [] | closed | false | null | 1 | 2022-04-06T13:59:51Z | 2022-04-07T09:46:30Z | 2022-04-07T09:40:27Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4111/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4111/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4111.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4111",
"merged_at": "2022-04-07T09:40:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4111.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4111"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/847/comments | https://api.github.com/repos/huggingface/datasets/issues/847/events | https://github.com/huggingface/datasets/issues/847 | 742,179,495 | MDU6SXNzdWU3NDIxNzk0OTU= | 847 | multiprocessing in dataset map "can only test a child process" | [] | closed | false | null | 9 | 2020-11-13T06:01:04Z | 2022-10-05T12:22:51Z | 2022-10-05T12:22:51Z | null | Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
    return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/multiprocess/pool.py", line 119, in worker
    result = (True, func(*args, **kwds))
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 156, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/fingerprint.py", line 163, in wrapper
    out = func(self, *args, **kwargs)
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 1510, in _map_single
    for i in pbar:
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 228, in __iter__
    for obj in super(tqdm_notebook, self).__iter__(*args, **kwargs):
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1186, in __iter__
    self.close()
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/notebook.py", line 251, in close
    super(tqdm_notebook, self).close(*args, **kwargs)
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1291, in close
    fp_write('')
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/tqdm/std.py", line 1288, in fp_write
    self.fp.write(_unicode(s))
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/lib/redirect.py", line 91, in new_write
    cb(name, data)
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/wandb_run.py", line 598, in _console_callback
    self._backend.interface.publish_output(name, data)
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 146, in publish_output
    self._publish_output(o)
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 151, in _publish_output
    self._publish(rec)
  File "/home/jovyan/share/users/tlaurent/invitae-bert/ve/lib/python3.6/site-packages/wandb/sdk/interface/interface.py", line 431, in _publish
    if self._process and not self._process.is_alive():
  File "/usr/lib/python3.6/multiprocessing/process.py", line 134, in is_alive
    assert self._parent_pid == os.getpid(), 'can only test a child process'
AssertionError: can only test a child process
"""
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/847/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/847/timeline | null | completed | null | null | false | [
"It looks like an issue with wandb/tqdm here.\r\nWe're using the `multiprocess` library instead of the `multiprocessing` builtin python package to support various types of mapping functions. Maybe there's some sort of incompatibility.\r\n\r\nCould you make a minimal script to reproduce or a google colab ?",
"hi f... |
https://api.github.com/repos/huggingface/datasets/issues/1035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1035/comments | https://api.github.com/repos/huggingface/datasets/issues/1035/events | https://github.com/huggingface/datasets/pull/1035 | 755,947,097 | MDExOlB1bGxSZXF1ZXN0NTMxNTczMTc3 | 1,035 | add wiki_hop | [] | closed | false | null | 1 | 2020-12-03T07:32:26Z | 2020-12-03T16:43:40Z | 2020-12-03T16:41:12Z | null | This PR adds the WikiHop dataset from the QAngaroo multi hop reading comprehension datasets
More info:
http://qangaroo.cs.ucl.ac.uk/index.html
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1035/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1035/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1035.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1035",
"merged_at": "2020-12-03T16:41:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1035.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1035"
} | true | [
"Also the dummy data files are quite big (500KB)\r\nIf you could reduce that that would be nice (just look at the files inside and remove unecessary chunks of texts)\r\nin general dummy data are just a few KB and we suggest to not get higher than 50KB\r\n\r\nHaving light dummy data makes the repo faster to clone"
] |
https://api.github.com/repos/huggingface/datasets/issues/4339 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4339/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4339/comments | https://api.github.com/repos/huggingface/datasets/issues/4339/events | https://github.com/huggingface/datasets/pull/4339 | 1,234,496,289 | PR_kwDODunzps43v0WT | 4,339 | Dataset loader for the MSLR2022 shared task | [] | closed | false | null | 9 | 2022-05-12T21:23:41Z | 2022-07-18T17:19:27Z | 2022-07-18T16:58:34Z | null | This PR adds a dataset loader for the [MSLR2022 Shared Task](https://github.com/allenai/mslr-shared-task). Both the MS^2 and Cochrane datasets can be loaded with this dataloader:
```python
from datasets import load_dataset
ms2 = load_dataset("mslr2022", "ms2")
cochrane = load_dataset("mslr2022", "cochrane")
```
Usage looks like:
```python
>>> ms2 = load_dataset("mslr2022", "ms2", split="validation")
>>> ms2.keys()
dict_keys(['review_id', 'pmid', 'title', 'abstract', 'target', 'background', 'reviews_info'])
>>> ms2[0].target
'Conclusions SC therapy is effective for PAH in pre clinical studies .\nThese results may help to st and ardise pre clinical animal studies and provide a theoretical basis for clinical trial design in the future .'
```
I have tested that this works with the following command:
```bash
datasets-cli test datasets/mslr2022 --save_infos --all_configs
```
However, I am having a little trouble generating the dummy data:
```bash
datasets-cli dummy_data datasets/mslr2022 --auto_generate
```
errors out with the following stack trace:
```
Couldn't generate dummy file 'datasets/mslr2022/dummy/ms2/1.0.0/dummy_data/mslr_data.tar.gz/mslr_data/ms2/convert_to_cochrane.py'. Ignore that if this file is not useful for dummy data.
Traceback (most recent call last):
  File "/Users/johngiorgi/.pyenv/versions/datasets/bin/datasets-cli", line 11, in <module>
    load_entry_point('datasets', 'console_scripts', 'datasets-cli')()
  File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/datasets_cli.py", line 39, in main
    service.run()
  File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 319, in run
    keep_uncompressed=self._keep_uncompressed,
  File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/commands/dummy_data.py", line 361, in _autogenerate_dummy_data
    dataset_builder._prepare_split(split_generator, check_duplicate_keys=False)
  File "/Users/johngiorgi/Documents/dev/datasets/src/datasets/builder.py", line 1146, in _prepare_split
    desc=f"Generating {split_info.name} split",
  File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/Users/johngiorgi/.cache/huggingface/modules/datasets_modules/datasets/mslr2022/b4becd2f52cf18255d4934d7154c2a1127fb393371b87b3c1fc2c8b35a777cea/mslr2022.py", line 149, in _generate_examples
    reviews_info_df = pd.read_csv(reviews_info_filepath, index_col=0)
  File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/util/_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 586, in read_csv
    return _read(filepath_or_buffer, kwds)
  File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 488, in _read
    return parser.read(nrows)
  File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/readers.py", line 1047, in read
    index, columns, col_dict = self._engine.read(nrows)
  File "/Users/johngiorgi/.pyenv/versions/3.7.13/envs/datasets/lib/python3.7/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 224, in read
    chunks = self._reader.read_low_memory(nrows)
  File "pandas/_libs/parsers.pyx", line 801, in pandas._libs.parsers.TextReader.read_low_memory
  File "pandas/_libs/parsers.pyx", line 857, in pandas._libs.parsers.TextReader._read_rows
  File "pandas/_libs/parsers.pyx", line 843, in pandas._libs.parsers.TextReader._tokenize_rows
  File "pandas/_libs/parsers.pyx", line 1925, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: EOF inside string starting at row 2
```
I think this may have to do with unusual line terminators in the original data. When I open it in VSCode, it complains:
```
The file 'dev-inputs.csv' contains one or more unusual line terminator characters, like Line Separator (LS) or Paragraph Separator (PS).
It is recommended to remove them from the file. This can be configured via `editor.unusualLineTerminators`.
```
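One way to strip those terminators before parsing, as a rough sketch (the Unicode Line/Paragraph Separator code points are `\u2028`/`\u2029`; this is not part of the PR):
```python
def strip_unusual_terminators(path: str) -> None:
    # Replace Unicode LS/PS characters with spaces so pandas can tokenize the CSV
    with open(path, encoding="utf-8") as f:
        text = f.read()
    text = text.replace("\u2028", " ").replace("\u2029", " ")  # LS, PS
    with open(path, "w", encoding="utf-8") as f:
        f.write(text)
```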
Tagging the organizers of the shared task in case they want to sanity check this or add any info to the model card :) @lucylw @jayded
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4339/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4339/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4339.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4339",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4339.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4339"
} | true | [
"I think the underlying issue is in https://github.com/huggingface/datasets/blob/c0ed6fdc29675b3565b01b77fde5ab5d9d8b60ec/src/datasets/commands/dummy_data.py#L124 - where `CSV`s are considered to be in the same class of file as text, jsonl, and tsv.\r\n\r\nI think this is an error because CSVs can have newlines wit... |
https://api.github.com/repos/huggingface/datasets/issues/3404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3404/comments | https://api.github.com/repos/huggingface/datasets/issues/3404/events | https://github.com/huggingface/datasets/issues/3404 | 1,073,657,561 | I_kwDODunzps4__rbZ | 3,404 | Optimize ZIP format inference | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2021-12-07T18:44:49Z | 2021-12-14T17:08:41Z | 2021-12-14T17:08:41Z | null | **Is your feature request related to a problem? Please describe.**
When hundreds of ZIP files are present in a dataset, format inference takes too long.
See: https://github.com/bigscience-workshop/data_tooling/issues/232#issuecomment-986685497
**Describe the solution you'd like**
Iterate over a maximum number of files.
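A minimal sketch of that idea (the names and cap value are illustrative, not the actual implementation):
```python
from collections import Counter

MAX_FILES_TO_CHECK = 200  # only sample this many files for format inference

def infer_format(data_files):
    # Infer the most common file extension from a bounded sample of the files
    extensions = (f.rsplit(".", 1)[-1].lower() for f in data_files[:MAX_FILES_TO_CHECK])
    return Counter(extensions).most_common(1)[0][0]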
CC: @lhoestq
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3404/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3404/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/963/comments | https://api.github.com/repos/huggingface/datasets/issues/963/events | https://github.com/huggingface/datasets/pull/963 | 754,451,234 | MDExOlB1bGxSZXF1ZXN0NTMwMzQ5NjQ4 | 963 | add CODAH dataset | [] | closed | false | null | 0 | 2020-12-01T14:37:05Z | 2020-12-02T13:45:58Z | 2020-12-02T13:21:25Z | null | Adding CODAH dataset.
More info:
https://github.com/Websail-NU/CODAH | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/963/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/963/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/963.diff",
"html_url": "https://github.com/huggingface/datasets/pull/963",
"merged_at": "2020-12-02T13:21:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/963.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/963"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/730 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/730/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/730/comments | https://api.github.com/repos/huggingface/datasets/issues/730/events | https://github.com/huggingface/datasets/issues/730 | 721,073,812 | MDU6SXNzdWU3MjEwNzM4MTI= | 730 | Possible caching bug | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 7 | 2020-10-14T02:02:34Z | 2022-11-22T01:45:54Z | 2020-10-29T09:36:01Z | null | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produces this output:
```
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': 'ð\x9f¤\x97ð\x9f¤\x97ð\x9f¤\x97'}
```
Just changing the order (and deleting the temp files):
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
```
produces this:
```
Using custom data configuration default
Downloading and preparing dataset text/default-15600e4d83254059 (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155...
Dataset text downloaded and prepared to /home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155. Subsequent calls will reuse this data.
{'text': '🤗🤗🤗'}
Using custom data configuration default
Reusing dataset text (/home/arne/.cache/huggingface/datasets/text/default-15600e4d83254059/0.0.0/52cefbb2b82b015d4253f1aeb1e6ee5591124a6491e834acfe1751f765925155)
{'text': '🤗🤗🤗'}
```
Is it intended that the cache path does not depend on the config entries?
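In the meantime, a possible workaround sketch (assuming a `datasets` version where `download_mode` accepts the string `"force_redownload"`, which forces regeneration instead of reusing the cached version):
```python
import datasets

dataset = datasets.load_dataset(
    "text",
    data_files=["test1.txt"],
    split="train",
    encoding="utf-8",
    download_mode="force_redownload",  # avoid reusing a cache built with another config
)
```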
tested with datasets==1.1.2 and python==3.8.5 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/730/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/730/timeline | null | completed | null | null | false | [
"Thanks for reporting. That's a bug indeed.\r\nApparently only the `data_files` parameter is taken into account right now in `DatasetBuilder._create_builder_config` but it should also be the case for `config_kwargs` (or at least the instantiated `builder_config`)",
"Hi, does this bug be fixed? when I load JSON fi... |
https://api.github.com/repos/huggingface/datasets/issues/2995 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2995/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2995/comments | https://api.github.com/repos/huggingface/datasets/issues/2995/events | https://github.com/huggingface/datasets/pull/2995 | 1,013,143,868 | PR_kwDODunzps4sjThd | 2,995 | Fix trivia_qa unfiltered | [] | closed | false | null | 1 | 2021-10-01T09:53:43Z | 2021-10-01T10:04:11Z | 2021-10-01T10:04:10Z | null | Fix https://github.com/huggingface/datasets/issues/2993 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2995/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2995/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2995.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2995",
"merged_at": "2021-10-01T10:04:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2995.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2995"
} | true | [
"CI fails due to missing tags, but they will be added in https://github.com/huggingface/datasets/pull/2949"
] |
https://api.github.com/repos/huggingface/datasets/issues/3358 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3358/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3358/comments | https://api.github.com/repos/huggingface/datasets/issues/3358/events | https://github.com/huggingface/datasets/issues/3358 | 1,068,623,216 | I_kwDODunzps4_seVw | 3,358 | add new field, and get errors | [] | closed | false | null | 2 | 2021-12-01T16:35:38Z | 2021-12-02T02:26:22Z | 2021-12-02T02:26:22Z | null | After adding the new field **tokenized_examples["example_id"]**, I get the errors below.
I think this is because the data is converted to tensors, and **tokenized_examples["example_id"]** is a list of strings.
**all fields**
```
***************** train_dataset 1: Dataset({
    features: ['attention_mask', 'end_positions', 'example_id', 'input_ids', 'start_positions', 'token_type_ids'],
    num_rows: 87714
})
```
**Errors**
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 705, in convert_to_tensors
    tensor = as_tensor(value)
ValueError: too many dimensions 'str'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3358/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3358/timeline | null | completed | null | null | false | [
"Hi, \r\n\r\ncould you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests? ",
"> Hi,\r\n> \r\n> could you please post this question on our [Forum](https://discuss.huggingface.co/) as we keep issues for bugs and feature requests?\r\n\r\nok."
] |
https://api.github.com/repos/huggingface/datasets/issues/2243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2243/comments | https://api.github.com/repos/huggingface/datasets/issues/2243/events | https://github.com/huggingface/datasets/issues/2243 | 862,909,389 | MDU6SXNzdWU4NjI5MDkzODk= | 2,243 | Map is slow and processes batches one after another | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 5 | 2021-04-20T14:58:20Z | 2021-05-03T17:54:33Z | 2021-05-03T17:54:32Z | null | ## Describe the bug
I have a bug that is somewhat unclear to me, and I can't figure out what the problem is. The code works as expected on a small subset of my dataset (2,000 samples) on my local machine, but when I execute the same code with a larger dataset (1.4 million samples) this problem occurs. That's why I can't give exact steps to reproduce, I'm sorry.
I process a large dataset in a two-step process. I first call `map` on a dataset I load from disk and create a new dataset from it. This works as expected and `map` uses all the workers I started it with. Then I process the dataset created by the first step, again with `map`, which is really slow and starts only one or two processes at a time. The number of processes is the same for both steps.
pseudo code:
```python
ds = datasets.load_from_disk("path")
new_dataset = ds.map(work, batched=True, ...) # fast uses all processes
final_dataset = new_dataset.map(work2, batched=True, ...) # slow starts one process after another
```
## Expected results
Second stage should be as fast as the first stage.
## Versions
Paste the output of the following code:
- Datasets: 1.5.0
- Python: 3.8.8 (default, Feb 24 2021, 21:46:12)
- Platform: Linux-5.4.0-60-generic-x86_64-with-glibc2.10
Do you guys have any idea? Thanks a lot! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2243/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2243/timeline | null | completed | null | null | false | [
"Hi @villmow, thanks for reporting.\r\n\r\nCould you please try with the Datasets version 1.6? We released it yesterday and it fixes some issues about the processing speed. You can see the fix implemented by @lhoestq here: #2122.\r\n\r\nOnce you update Datasets, please confirm if the problem persists.",
"Hi @albe... |
https://api.github.com/repos/huggingface/datasets/issues/4956 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4956/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4956/comments | https://api.github.com/repos/huggingface/datasets/issues/4956/events | https://github.com/huggingface/datasets/pull/4956 | 1,366,475,160 | PR_kwDODunzps4-m5NU | 4,956 | Fix TF tests for 2.10 | [] | closed | false | null | 1 | 2022-09-08T14:39:10Z | 2022-09-08T15:16:51Z | 2022-09-08T15:14:44Z | null | Fixes #4953 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4956/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4956/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4956.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4956",
"merged_at": "2022-09-08T15:14:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4956.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4956"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5765/comments | https://api.github.com/repos/huggingface/datasets/issues/5765/events | https://github.com/huggingface/datasets/issues/5765 | 1,671,388,824 | I_kwDODunzps5jn16Y | 5,765 | ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['text'] | [] | open | false | null | 4 | 2023-04-17T15:00:50Z | 2023-04-25T13:50:45Z | null | null | ### Describe the bug
Following is the code that I am trying to run, but I am facing an error (I have attached the whole error below):
My code:
```
from collections import OrderedDict
import warnings
import flwr as fl
import torch
import numpy as np
import random
from torch.utils.data import DataLoader
from datasets import load_dataset, load_metric
from transformers import AutoTokenizer, DataCollatorWithPadding
from transformers import AutoModelForSequenceClassification
from transformers import AdamW
# from transformers import tokenized_datasets

warnings.filterwarnings("ignore", category=UserWarning)
# DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
DEVICE = "cpu"
CHECKPOINT = "distilbert-base-uncased"  # transformer model checkpoint


def load_data():
    """Load IMDB data (training and eval)"""
    raw_datasets = load_dataset("yhavinga/imdb_dutch")
    raw_datasets = raw_datasets.shuffle(seed=42)

    # remove unnecessary data split
    del raw_datasets["unsupervised"]

    tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)

    def tokenize_function(examples):
        return tokenizer(examples["text"], truncation=True)

    # random 100 samples
    population = random.sample(range(len(raw_datasets["train"])), 100)

    tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
    tokenized_datasets["train"] = tokenized_datasets["train"].select(population)
    tokenized_datasets["test"] = tokenized_datasets["test"].select(population)

    # tokenized_datasets = tokenized_datasets.remove_columns("text")
    # tokenized_datasets = tokenized_datasets.rename_column("label", "labels")
    tokenized_datasets = tokenized_datasets.remove_columns("attention_mask")
    tokenized_datasets = tokenized_datasets.remove_columns("input_ids")
    tokenized_datasets = tokenized_datasets.remove_columns("label")
    tokenized_datasets = tokenized_datasets.remove_columns("text_en")
    # tokenized_datasets = tokenized_datasets.remove_columns(raw_datasets["train"].column_names)
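    # Editorial note (hedged, based on the maintainer's reply in the comments):
    # DataCollatorWithPadding below expects tokenized columns such as
    # "input_ids"/"attention_mask"; removing them while keeping the raw "text"
    # column is what later raises the ValueError shown in the traceback.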
    data_collator = DataCollatorWithPadding(tokenizer=tokenizer)
    trainloader = DataLoader(
        tokenized_datasets["train"],
        shuffle=True,
        batch_size=32,
        collate_fn=data_collator,
    )
    testloader = DataLoader(
        tokenized_datasets["test"], batch_size=32, collate_fn=data_collator
    )

    return trainloader, testloader


def train(net, trainloader, epochs):
    optimizer = AdamW(net.parameters(), lr=5e-4)
    net.train()
    for _ in range(epochs):
        for batch in trainloader:
            batch = {k: v.to(DEVICE) for k, v in batch.items()}
            outputs = net(**batch)
            loss = outputs.loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()


def test(net, testloader):
    metric = load_metric("accuracy")
    loss = 0
    net.eval()
    for batch in testloader:
        batch = {k: v.to(DEVICE) for k, v in batch.items()}
        with torch.no_grad():
            outputs = net(**batch)
        logits = outputs.logits
        loss += outputs.loss.item()
        predictions = torch.argmax(logits, dim=-1)
        metric.add_batch(predictions=predictions, references=batch["labels"])
    loss /= len(testloader.dataset)
    accuracy = metric.compute()["accuracy"]
    return loss, accuracy


def main():
    net = AutoModelForSequenceClassification.from_pretrained(
        CHECKPOINT, num_labels=2
    ).to(DEVICE)

    trainloader, testloader = load_data()

    # Flower client
    class IMDBClient(fl.client.NumPyClient):
        def get_parameters(self, config):
            return [val.cpu().numpy() for _, val in net.state_dict().items()]

        def set_parameters(self, parameters):
            params_dict = zip(net.state_dict().keys(), parameters)
            state_dict = OrderedDict({k: torch.Tensor(v) for k, v in params_dict})
            net.load_state_dict(state_dict, strict=True)

        def fit(self, parameters, config):
            self.set_parameters(parameters)
            print("Training Started...")
            train(net, trainloader, epochs=1)
            print("Training Finished.")
            return self.get_parameters(config={}), len(trainloader), {}

        def evaluate(self, parameters, config):
            self.set_parameters(parameters)
            loss, accuracy = test(net, testloader)
            return float(loss), len(testloader), {"accuracy": float(accuracy)}

    # Start client
    fl.client.start_numpy_client(server_address="localhost:8080", client=IMDBClient())


if __name__ == "__main__":
    main()
```
Error:
```
Traceback (most recent call last):
  File "client_2.py", line 136, in <module>
    main()
  File "client_2.py", line 132, in main
    fl.client.start_numpy_client(server_address="localhost:8080", client=IMDBClient())
  File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 208, in start_numpy_client
    start_client(
  File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 142, in start_client
    client_message, sleep_duration, keep_going = handle(
  File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 68, in handle
    return _fit(client, server_msg.fit_ins), 0, True
  File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/grpc_client/message_handler.py", line 157, in _fit
    fit_res = client.fit(fit_ins)
  File "/home/saurav/.local/lib/python3.8/site-packages/flwr/client/app.py", line 252, in _fit
    results = self.numpy_client.fit(parameters, ins.config) # type: ignore
  File "client_2.py", line 122, in fit
    train(net, trainloader, epochs=1)
  File "client_2.py", line 76, in train
    for batch in trainloader:
  File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 652, in __next__
    data = self._next_data()
  File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 692, in _next_data
    data = self._dataset_fetcher.fetch(index) # may raise StopIteration
  File "/home/saurav/.local/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "/home/saurav/.local/lib/python3.8/site-packages/transformers/data/data_collator.py", line 221, in __call__
    batch = self.tokenizer.pad(
  File "/home/saurav/.local/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2713, in pad
    raise ValueError(
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['text']
```
### Steps to reproduce the bug
Run the above code.
### Expected behavior
Don't know, doing it for the first time.
### Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5765/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5765/timeline | null | null | null | null | false | [
"You need to remove the `text` and `text_en` columns before passing the dataset to the `DataLoader` to avoid this error:\r\n```python\r\ntokenized_datasets = tokenized_datasets.remove_columns([\"text\", \"text_en\"])\r\n```\r\n",
"Thanks @mariosasko. Now I am getting this error:\r\n\r\n```\r\nTraceback (most rece... |
https://api.github.com/repos/huggingface/datasets/issues/5605 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5605/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5605/comments | https://api.github.com/repos/huggingface/datasets/issues/5605/events | https://github.com/huggingface/datasets/pull/5605 | 1,608,865,460 | PR_kwDODunzps5LPPf5 | 5,605 | Update README logo | [] | closed | false | null | 3 | 2023-03-03T15:46:31Z | 2023-03-03T21:57:18Z | 2023-03-03T21:50:17Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5605/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5605/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5605.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5605",
"merged_at": "2023-03-03T21:50:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5605.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5605"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Are you sure it's safe to remove? https://github.com/huggingface/datasets/pull/3866",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benc... |
https://api.github.com/repos/huggingface/datasets/issues/1389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1389/comments | https://api.github.com/repos/huggingface/datasets/issues/1389/events | https://github.com/huggingface/datasets/pull/1389 | 760,402,224 | MDExOlB1bGxSZXF1ZXN0NTM1MjM5OTYy | 1,389 | add amazon polarity dataset | [] | closed | false | null | 5 | 2020-12-09T14:58:21Z | 2020-12-11T11:45:39Z | 2020-12-11T11:41:01Z | null | This corresponds to the amazon (binary dataset) requested in https://github.com/huggingface/datasets/issues/353 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1389/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1389/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1389.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1389",
"merged_at": "2020-12-11T11:41:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1389.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1389"
} | true | [
"`amazon_polarity` is probably a subset of `amazon_us_reviews` but I am not entirely sure about that.\r\nI guess `amazon_polarity` will help in reproducing results of papers using this dataset since even if it is a subset from `amazon_us_reviews`, it is not trivial how to extract `amazon_polarity` from `amazon_us_r... |
https://api.github.com/repos/huggingface/datasets/issues/5490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5490/comments | https://api.github.com/repos/huggingface/datasets/issues/5490/events | https://github.com/huggingface/datasets/pull/5490 | 1,565,842,327 | PR_kwDODunzps5I_nz- | 5,490 | Do not add index column by default when exporting to CSV | [] | closed | false | null | 2 | 2023-02-01T10:20:55Z | 2023-02-09T09:29:08Z | 2023-02-09T09:22:23Z | null | As pointed out by @merveenoyan, default behavior of `Dataset.to_csv` adds the index as an additional column without name.
This PR changes the default behavior, so that now the index column is not written.
To add the index column, now you need to pass `index=True` and also `index_label=<name of the index colum>` to name that column.
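Usage after this change would look like this (a sketch; `ds` stands for any `Dataset`, and the keyword arguments are forwarded to `pandas.DataFrame.to_csv`):
```python
ds.to_csv("data.csv")                                 # no index column (new default)
ds.to_csv("data.csv", index=True, index_label="idx")  # opt back in with a named column
```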
CC: @merveenoyan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5490/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5490/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5490.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5490",
"merged_at": "2023-02-09T09:22:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5490.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5490"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/5406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5406/comments | https://api.github.com/repos/huggingface/datasets/issues/5406/events | https://github.com/huggingface/datasets/issues/5406 | 1,519,140,544 | I_kwDODunzps5ajD7A | 5,406 | [2.6.1][2.7.0] Upgrade `datasets` to fix `TypeError: can only concatenate str (not "int") to str` | [] | open | false | null | 11 | 2023-01-04T15:10:04Z | 2023-06-21T18:45:38Z | null | null | `datasets` 2.6.1 and 2.7.0 no longer support certain datasets, such as IMDB, CoNLL, or MNIST.
When loading one of these datasets with 2.6.1 or 2.7.0, you may see this error:
```python
TypeError: can only concatenate str (not "int") to str
```
This is because we started to update the metadata of those datasets to a format that is not supported in 2.6.1 and 2.7.0
This change is required or those datasets won't be supported by the Hugging Face Hub.
Therefore if you encounter this error or if you're using `datasets` 2.6.1 or 2.7.0, we encourage you to update to a newer version.
For example, versions 2.6.2 and 2.7.1 patch this issue.
```python
pip install -U datasets
```
All the datasets affected are the ones with a ClassLabel feature type and YAML "dataset_info" metadata. More info [here](https://github.com/huggingface/datasets/issues/5275).
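A quick sanity check after upgrading (a sketch):
```python
import datasets

assert datasets.__version__ not in ("2.6.1", "2.7.0"), (
    "Affected version installed; run `pip install -U datasets`"
)
```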
We apologize for the inconvenience. | {
"+1": 10,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 10,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5406/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5406/timeline | null | null | null | null | false | [
"I still get this error on 2.9.0\r\n<img width=\"1925\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7208470/215597359-2f253c76-c472-4612-8099-d3a74d16eb29.png\">\r\n",
"Hi ! I just tested locally and or colab and it works fine for 2.9 on `sst2`.\r\n\r\nAlso the code that is shown in your stack t... |
https://api.github.com/repos/huggingface/datasets/issues/824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/824/comments | https://api.github.com/repos/huggingface/datasets/issues/824/events | https://github.com/huggingface/datasets/issues/824 | 739,896,526 | MDU6SXNzdWU3Mzk4OTY1MjY= | 824 | Discussion using datasets in offline mode | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": fals... | closed | false | null | 8 | 2020-11-10T13:10:51Z | 2022-02-15T10:32:36Z | 2022-02-15T10:32:36Z | null | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I am creating this ticket to discuss this a bit and gather what you have in mind, as well as other proposals.
Here are some points to open discussion:
- if you want to prepare your code/datasets on your machine (having an internet connection) but run it on another, offline machine (not having an internet connection), it won't work as is, even if you have all the files locally on that machine.
- AFAIK, you can make it work if you manually put the python files (csv.py for example) on this offline machine and change your code to `datasets.load_dataset("MY_PATH/csv.py", ...)`. But it would be much better if you could run the same code without modification if the files are available locally.
- I've also been considering the requirement of downloading Python code and executing it on your machine to use datasets. This can be an issue in a professional context. Downloading a CSV/H5 file is acceptable; downloading an executable script can open many security issues. We certainly need a mechanism to at least "freeze" the dataset code you retrieved once, so that you can review it if you want and then be sure you use this one everywhere and not a version downloaded from the internet.
WDYT? (thks)
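For reference, later versions of `datasets` expose an offline switch via the `HF_DATASETS_OFFLINE` environment variable; a minimal sketch (assuming the dataset files are already local or cached):
```python
import os

os.environ["HF_DATASETS_OFFLINE"] = "1"  # must be set before importing datasets

import datasets

ds = datasets.load_dataset("csv", data_files="my_local_file.csv")  # no network access
```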
| {
"+1": 8,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/824/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/824/timeline | null | completed | null | null | false | [
"No comments ?",
"I think it would be very cool. I'm currently working on a cluster from Compute Canada, and I have internet access only when I'm not in the nodes where I run the scripts. So I was expecting to be able to use the wmt14 dataset until I realized I needed internet connection even if I downloaded the ... |
https://api.github.com/repos/huggingface/datasets/issues/1812 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1812/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1812/comments | https://api.github.com/repos/huggingface/datasets/issues/1812/events | https://github.com/huggingface/datasets/pull/1812 | 799,379,178 | MDExOlB1bGxSZXF1ZXN0NTY2MDMxODIy | 1,812 | Add CIFAR-100 Dataset | [] | closed | false | null | 2 | 2021-02-02T15:22:59Z | 2021-02-08T11:10:18Z | 2021-02-08T10:39:06Z | null | Adding CIFAR-100 Dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1812/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1812/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1812.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1812",
"merged_at": "2021-02-08T10:39:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1812.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1812"
} | true | [
"Hi @lhoestq,\r\nI have updated with the changes from the review.",
"Thanks for approving :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4135/comments | https://api.github.com/repos/huggingface/datasets/issues/4135/events | https://github.com/huggingface/datasets/pull/4135 | 1,198,307,610 | PR_kwDODunzps416-Rn | 4,135 | Support streaming xtreme dataset for PAN-X config | [] | closed | false | null | 1 | 2022-04-09T06:19:48Z | 2022-05-06T08:39:40Z | 2022-04-11T06:54:14Z | null | Support streaming xtreme dataset for PAN-X config. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4135/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4135/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4135.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4135",
"merged_at": "2022-04-11T06:54:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4135.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4135"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1315 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1315/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1315/comments | https://api.github.com/repos/huggingface/datasets/issues/1315/events | https://github.com/huggingface/datasets/pull/1315 | 759,548,706 | MDExOlB1bGxSZXF1ZXN0NTM0NTM1NjM4 | 1,315 | add yelp_review_full | [] | closed | false | null | 0 | 2020-12-08T15:38:27Z | 2020-12-09T15:55:49Z | 2020-12-09T15:55:49Z | null | This corresponds to the Yelp-5 requested in https://github.com/huggingface/datasets/issues/353
I included the dataset card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1315/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1315/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1315.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1315",
"merged_at": "2020-12-09T15:55:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1315.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1315"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2397/comments | https://api.github.com/repos/huggingface/datasets/issues/2397/events | https://github.com/huggingface/datasets/pull/2397 | 899,427,378 | MDExOlB1bGxSZXF1ZXN0NjUxMTMxMTY0 | 2,397 | Fix number of classes in indic_glue sna.bn dataset | [] | closed | false | null | 2 | 2021-05-24T08:18:55Z | 2021-05-25T16:32:16Z | 2021-05-25T16:32:16Z | null | As read in the [paper](https://www.aclweb.org/anthology/2020.findings-emnlp.445.pdf), Table 11. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 1,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2397/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2397/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2397.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2397",
"merged_at": "2021-05-25T16:32:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2397.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2397"
} | true | [
"@lhoestq there are many things missing in the README.md file, but this correction is right despite not passing the validation tests...",
"Yes indeed. We run the validation in all modified readme because we think that it is the time when contributors are the most likely to fix a dataset card - or it will never be... |
https://api.github.com/repos/huggingface/datasets/issues/3541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3541/comments | https://api.github.com/repos/huggingface/datasets/issues/3541/events | https://github.com/huggingface/datasets/issues/3541 | 1,095,033,828 | I_kwDODunzps5BROPk | 3,541 | Support 7-zip compressed data files | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 1 | 2022-01-06T07:11:03Z | 2022-07-19T10:18:30Z | null | null | **Is your feature request related to a problem? Please describe.**
We should support 7-zip compressed data files:
- [x] in `extract`:
- #4672
- [ ] in `iter_archive`: for streaming mode
both in streaming and non-streaming modes.
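For context, a minimal sketch of reading a 7-zip archive with the `py7zr` package, which is the kind of backend such support could build on (an illustration only, not the actual `datasets` integration):

```python
import py7zr

# Load every member of the archive into memory and iterate over it.
with py7zr.SevenZipFile("data.7z", mode="r") as archive:
    for name, bio in archive.readall().items():
        print(name, len(bio.read()))
```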
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3541/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3541/timeline | null | null | null | null | false | [
"This should also resolve: https://github.com/huggingface/datasets/issues/3185."
] |
https://api.github.com/repos/huggingface/datasets/issues/2130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2130/comments | https://api.github.com/repos/huggingface/datasets/issues/2130/events | https://github.com/huggingface/datasets/issues/2130 | 843,111,936 | MDU6SXNzdWU4NDMxMTE5MzY= | 2,130 | wikiann dataset is missing columns | [
{
"color": "7057ff",
"default": true,
"description": "Good for newcomers",
"id": 1935892877,
"name": "good first issue",
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue"
}
] | closed | false | null | 5 | 2021-03-29T08:23:00Z | 2021-08-27T14:44:18Z | 2021-08-27T14:44:18Z | null | Hi
The Wikiann dataset needs to have a "spans" column, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets. Could you please have a look? Thank you @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2130/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2130/timeline | null | completed | null | null | false | [
"Here please find TFDS format of this dataset: https://www.tensorflow.org/datasets/catalog/wikiann\r\nwhere there is a span column, this is really necessary to be able to use the data, and I appreciate your help @lhoestq ",
"Hi !\r\nApparently you can get the spans from the NER tags using `tags_to_spans` defined ... |
https://api.github.com/repos/huggingface/datasets/issues/273 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/273/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/273/comments | https://api.github.com/repos/huggingface/datasets/issues/273/events | https://github.com/huggingface/datasets/pull/273 | 638,968,054 | MDExOlB1bGxSZXF1ZXN0NDM0NjM0MzU4 | 273 | update cos_e to add cos_e v1.0 | [] | closed | false | null | 0 | 2020-06-15T16:03:22Z | 2020-06-16T08:25:54Z | 2020-06-16T08:25:52Z | null | This PR updates the cos_e dataset to add v1.0 as requested here #163
@nazneenrajani | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/273/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/273/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/273.diff",
"html_url": "https://github.com/huggingface/datasets/pull/273",
"merged_at": "2020-06-16T08:25:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/273.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/273"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5922/comments | https://api.github.com/repos/huggingface/datasets/issues/5922/events | https://github.com/huggingface/datasets/issues/5922 | 1,736,898,953 | I_kwDODunzps5nhvmJ | 5,922 | Length of table does not accurately reflect the split | [
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] | closed | false | null | 2 | 2023-06-01T18:56:26Z | 2023-06-02T16:13:31Z | 2023-06-02T16:13:31Z | null | ### Describe the bug
I load a Huggingface Dataset and do `train_test_split`. I'm expecting the underlying table for the dataset to also be split, but it's not.
### Steps to reproduce the bug

### Expected behavior
The expected behavior is that `len(hf_dataset["train"].data)` matches the length of the train split, rather than the entire unsplit dataset.
### Environment info
datasets 2.10.1
python 3.10.11 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5922/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5922/timeline | null | completed | null | null | false | [
"As already replied by @lhoestq (private channel):\r\n> `.train_test_split` (as well as `.shard`, `.select`) doesn't create a new arrow table to save time and disk space. Instead, it uses an indices mapping on top of the table that locate which examples are part of train or test.",
"This is an optimization that w... |
https://api.github.com/repos/huggingface/datasets/issues/4828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4828/comments | https://api.github.com/repos/huggingface/datasets/issues/4828/events | https://github.com/huggingface/datasets/pull/4828 | 1,336,040,168 | PR_kwDODunzps49B_vb | 4,828 | Support PIL Image objects in `add_item`/`add_column` | [] | open | false | null | 2 | 2022-08-11T14:25:45Z | 2023-02-23T14:01:47Z | null | null | Fix #4796
PS: We should also improve the type inference in `OptimizedTypeSequence` to make it possible to also infer the complex types (only `Image` currently) in nested arrays (e.g. `[[pil_image], [pil_image, pil_image]]` or `[{"img": pil_image}]`), but I plan to address this in a separate PR. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4828/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4828/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4828.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4828",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4828.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4828"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4828). All of your documentation changes will be reflected on that endpoint.",
"Hey @mariosasko could we please merge this? I'm still getting the original error at #4796 ."
] |
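Until PIL objects are supported directly in `add_item`/`add_column` (the subject of the PR above), a sketch of a workaround that already works: add the column as file paths and cast it to the `Image` feature (the paths below are placeholders):

```python
from datasets import Dataset, Image

ds = Dataset.from_dict({"text": ["a", "b"]})
ds = ds.add_column("image", ["img1.png", "img2.png"])  # placeholder paths
ds = ds.cast_column("image", Image())                  # decodes paths to PIL images on access
```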
https://api.github.com/repos/huggingface/datasets/issues/1204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1204/comments | https://api.github.com/repos/huggingface/datasets/issues/1204/events | https://github.com/huggingface/datasets/pull/1204 | 757,939,475 | MDExOlB1bGxSZXF1ZXN0NTMzMjA2MzE3 | 1,204 | adding meta_woz dataset | [] | closed | false | null | 0 | 2020-12-06T14:34:13Z | 2020-12-16T15:05:25Z | 2020-12-16T15:05:24Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1204/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1204/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1204.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1204",
"merged_at": "2020-12-16T15:05:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1204.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1204"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/3907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3907/comments | https://api.github.com/repos/huggingface/datasets/issues/3907/events | https://github.com/huggingface/datasets/pull/3907 | 1,168,575,998 | PR_kwDODunzps40Z_vd | 3,907 | Update README.md for SQuAD metric | [] | closed | false | null | 1 | 2022-03-14T15:52:31Z | 2022-03-15T17:04:20Z | 2022-03-15T17:04:19Z | null | Putting "Values from popular papers" as a subsection of "Output values" | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3907/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3907/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3907.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3907",
"merged_at": "2022-03-15T17:04:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3907.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3907"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3907). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/1862 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1862/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1862/comments | https://api.github.com/repos/huggingface/datasets/issues/1862/events | https://github.com/huggingface/datasets/pull/1862 | 805,722,293 | MDExOlB1bGxSZXF1ZXN0NTcxMjc2ODAx | 1,862 | Fix writing GPU Faiss index | [] | closed | false | null | 0 | 2021-02-10T17:32:03Z | 2021-02-10T18:17:48Z | 2021-02-10T18:17:47Z | null | As reported by @corticalstack, there is currently an error when we try to save a faiss index on GPU.
I fixed that by checking the index `getDevice()` method before calling `index_gpu_to_cpu`
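Conceptually, the guard looks something like this (a sketch with a hypothetical helper name, not the exact patch):

```python
import faiss

def _index_to_cpu_if_needed(index):
    # Only GPU indexes expose getDevice(); CPU indexes can be saved as-is.
    if hasattr(index, "getDevice") and index.getDevice() >= 0:
        index = faiss.index_gpu_to_cpu(index)
    return index
```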
Close #1859 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1862/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1862/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1862.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1862",
"merged_at": "2021-02-10T18:17:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1862.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1862"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5247/comments | https://api.github.com/repos/huggingface/datasets/issues/5247/events | https://github.com/huggingface/datasets/pull/5247 | 1,451,297,749 | PR_kwDODunzps5DAhto | 5,247 | Set dev version | [] | closed | false | null | 1 | 2022-11-16T10:17:31Z | 2022-11-16T10:22:20Z | 2022-11-16T10:17:50Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5247/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5247/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5247.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5247",
"merged_at": "2022-11-16T10:17:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5247.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5247"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5247). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/5681 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5681/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5681/comments | https://api.github.com/repos/huggingface/datasets/issues/5681/events | https://github.com/huggingface/datasets/issues/5681 | 1,645,630,784 | I_kwDODunzps5iFlVA | 5,681 | Add information about patterns search order to the doc about structuring repo | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 2 | 2023-03-29T11:44:49Z | 2023-04-03T18:31:11Z | 2023-04-03T18:31:11Z | null | Following [this](https://github.com/huggingface/datasets/issues/5650) issue I think we should add a note about the order of patterns that is used to find splits, see [my comment](https://github.com/huggingface/datasets/issues/5650#issuecomment-1488412527). Also we should reference this page in pages about packaged loaders.
I have a déjà vu that it had already been discussed at some point, but I don't remember where.... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5681/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5681/timeline | null | completed | null | null | false | [
"Good idea, I think I've seen this a couple of times before too on the forums. I can work on this :)",
"Closed in #5693 "
] |
https://api.github.com/repos/huggingface/datasets/issues/2495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2495/comments | https://api.github.com/repos/huggingface/datasets/issues/2495/events | https://github.com/huggingface/datasets/issues/2495 | 920,170,030 | MDU6SXNzdWU5MjAxNzAwMzA= | 2,495 | JAX formatting | [] | closed | false | null | 0 | 2021-06-14T08:32:07Z | 2021-06-21T16:15:49Z | 2021-06-21T16:15:49Z | null | We already support pytorch, tensorflow, numpy, pandas and arrow dataset formatting. Let's add jax as well | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2495/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2495/timeline | null | completed | null | null | false | [] |
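A minimal sketch of what the JAX formatting requested in the issue above looks like from the user side once supported (assumes `jax` is installed):

```python
from datasets import Dataset

ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]]}).with_format("jax")
print(type(ds[0]["x"]))  # a jax.numpy array
```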
https://api.github.com/repos/huggingface/datasets/issues/5722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5722/comments | https://api.github.com/repos/huggingface/datasets/issues/5722/events | https://github.com/huggingface/datasets/issues/5722 | 1,659,837,510 | I_kwDODunzps5i7xxG | 5,722 | Distributed Training Error on Customized Dataset | [] | closed | false | null | 1 | 2023-04-09T11:04:59Z | 2023-07-24T14:50:46Z | 2023-07-24T14:50:46Z | null | Hi guys, recently I tried to use `datasets` to train a dual encoder.
I built my own dataset script following the nice [tutorial](https://huggingface.co/docs/datasets/v2.11.0/en/dataset_script).
Here is my code:
```python
import random
import string

import datasets
import jsonlines
from datasets import Sequence

# RetrivalConfig and md5_hash are helpers defined elsewhere in my script
logger = datasets.logging.get_logger(__name__)


class RetrivalDataset(datasets.GeneratorBasedBuilder):
    """CrossEncoder dataset."""

    BUILDER_CONFIGS = [RetrivalConfig(name="DuReader")]
    # DEFAULT_CONFIG_NAME = "DuReader"

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "question": datasets.Value("string"),
                    "documents": Sequence(datasets.Value("string")),
                }
            ),
            supervised_keys=None,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        train_file = self.config.data_dir + self.config.train_file
        valid_file = self.config.data_dir + self.config.valid_file
        logger.info(f"Training on {self.config.train_file}")
        logger.info(f"Evaluating on {self.config.valid_file}")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"file_path": train_file}
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION, gen_kwargs={"file_path": valid_file}
            ),
        ]

    def _generate_examples(self, file_path):
        with jsonlines.open(file_path, "r") as f:
            for record in f:
                label = record["label"]
                question = record["question"]
                # dual encoder
                all_documents = record["all_documents"]
                positive_paragraph = all_documents.pop(label)
                all_documents = [positive_paragraph] + all_documents
                u_id = "{}_#_{}".format(
                    md5_hash(question + "".join(all_documents)),
                    "".join(random.sample(string.ascii_letters + string.digits, 7)),
                )
                item = {
                    "question": question,
                    "documents": all_documents,
                    "id": u_id,
                }
                yield u_id, item
```
It works well on a single GPU, but I got the following errors when using DDP:
```
Detected mismatch between collectives on ranks. Rank 1 is running collective: CollectiveFingerPrint(OpType=BARRIER), but Rank 0 is running collective: CollectiveFingerPrint(OpType=ALLGATHER_COALESCED)
```
Here is my training script on a machine with two A100s:
```bash
export TORCH_DISTRIBUTED_DEBUG=DETAIL
export TORCH_SHOW_CPP_STACKTRACES=1
export NCCL_DEBUG=INFO
export NCCL_DEBUG_SUBSYS=INIT,COLL,ENV
nohup torchrun --nproc_per_node 2 train.py experiments/de-big.json >logs/de-big.log 2>&1&
```
I am not sure if the error below is related to my dataset code when using DDP. I also noticed the PR (#5369), but I don't know when and where I should use the function (`split_dataset_by_node`).
@lhoestq hope you could help me?
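For reference, a minimal sketch of how `split_dataset_by_node` is typically used with a streaming dataset under `torchrun` (rank and world size read from the environment; this illustrates the API, it is not necessarily a fix for the collective mismatch above):

```python
import os

from datasets import load_dataset
from datasets.distributed import split_dataset_by_node

ds = load_dataset("c4", "en", split="train", streaming=True)
ds = split_dataset_by_node(
    ds,
    rank=int(os.environ["RANK"]),
    world_size=int(os.environ["WORLD_SIZE"]),
)
```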
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5722/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5722/timeline | null | completed | null | null | false | [
"Hmm the error doesn't seem related to data loading.\r\n\r\nRegarding `split_dataset_by_node`: it's generally used to split an iterable dataset (e.g. when streaming) in pytorch DDP. It's not needed if you use a regular dataset since the pytorch DataLoader already assigns a subset of the dataset indices to each node... |
https://api.github.com/repos/huggingface/datasets/issues/2887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2887/comments | https://api.github.com/repos/huggingface/datasets/issues/2887/events | https://github.com/huggingface/datasets/pull/2887 | 992,576,305 | MDExOlB1bGxSZXF1ZXN0NzMwODg4MTU3 | 2,887 | #2837 Use cache folder for lockfile | [] | closed | false | null | 1 | 2021-09-09T19:55:56Z | 2021-10-05T17:58:22Z | 2021-10-05T17:58:22Z | null | Fixes #2837
Use a directory inside the cache folder to store the FileLock.
The issue was that the lock file was in a read-only folder.
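Conceptually the change amounts to something like this (a sketch, not the exact diff; the lock file name is a placeholder):

```python
import os

from filelock import FileLock

cache_dir = os.path.expanduser("~/.cache/huggingface")      # assumed writable
lock_path = os.path.join(cache_dir, "dataset_script.lock")  # placeholder name

with FileLock(lock_path):
    ...  # critical section runs while the lock is held
```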
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2887/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2887/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2887.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2887",
"merged_at": "2021-10-05T17:58:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2887.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2887"
} | true | [
"The CI fail about the meteor metric is unrelated to this PR "
] |
https://api.github.com/repos/huggingface/datasets/issues/1798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1798/comments | https://api.github.com/repos/huggingface/datasets/issues/1798/events | https://github.com/huggingface/datasets/pull/1798 | 797,766,818 | MDExOlB1bGxSZXF1ZXN0NTY0Njk2NjE1 | 1,798 | Add Arabic sarcasm dataset | [] | closed | false | null | 1 | 2021-01-31T17:38:55Z | 2021-02-10T20:39:13Z | 2021-02-03T10:35:54Z | null | This MIT license dataset: https://github.com/iabufarha/ArSarcasm
Via https://sites.google.com/view/ar-sarcasm-sentiment-detection/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1798/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1798/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1798.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1798",
"merged_at": "2021-02-03T10:35:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1798.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1798"
} | true | [
"@lhoestq thanks for the comments - I believe these are now addressed. I re-generated the datasets_info.json and dummy data"
] |
https://api.github.com/repos/huggingface/datasets/issues/3495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3495/comments | https://api.github.com/repos/huggingface/datasets/issues/3495/events | https://github.com/huggingface/datasets/issues/3495 | 1,089,983,632 | I_kwDODunzps5A99SQ | 3,495 | Add VoxLingua107 | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 0 | 2021-12-28T15:51:43Z | 2021-12-28T15:51:43Z | null | null | ## Adding a Dataset
- **Name:** VoxLingua107
- **Description:** VoxLingua107 is a speech dataset for training spoken language identification models.
- **Paper:** https://arxiv.org/abs/2011.12998
- **Data:** http://bark.phon.ioc.ee/voxlingua107/
- **Motivation:** 107 languages, totaling 6628 hours for the train split.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3495/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3495/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2451 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2451/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2451/comments | https://api.github.com/repos/huggingface/datasets/issues/2451/events | https://github.com/huggingface/datasets/pull/2451 | 913,263,340 | MDExOlB1bGxSZXF1ZXN0NjYzMzIwNDY1 | 2,451 | Mention that there are no answers in adversarial_qa test set | [] | closed | false | null | 0 | 2021-06-07T08:13:57Z | 2021-06-07T08:34:14Z | 2021-06-07T08:34:13Z | null | As mention in issue https://github.com/huggingface/datasets/issues/2447, there are no answers in the test set | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2451/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2451/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2451.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2451",
"merged_at": "2021-06-07T08:34:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2451.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2451"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/245/comments | https://api.github.com/repos/huggingface/datasets/issues/245/events | https://github.com/huggingface/datasets/issues/245 | 631,985,108 | MDU6SXNzdWU2MzE5ODUxMDg= | 245 | SST-2 test labels are all -1 | [] | closed | false | null | 10 | 2020-06-05T21:41:42Z | 2021-12-08T00:47:32Z | 2020-06-06T16:56:41Z | null | I'm trying to test a model on the SST-2 task, but all the labels I see in the test set are -1.
```
>>> import nlp
>>> glue = nlp.load_dataset('glue', 'sst2')
>>> glue
{'train': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 67349), 'validation': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 872), 'test': Dataset(schema: {'sentence': 'string', 'label': 'int64', 'idx': 'int32'}, num_rows: 1821)}
>>> list(l['label'] for l in glue['test'])
[-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1]
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/245/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/245/timeline | null | completed | null | null | false | [
"this also happened to me with `nlp.load_dataset('glue', 'mnli')`",
"Yes, this is because the test sets for glue are hidden so the labels are\nnot publicly available. You can read the glue paper for more details.\n\nOn Sat, 6 Jun 2020 at 18:16, Jack Morris <notifications@github.com> wrote:\n\n> this also happened... |
https://api.github.com/repos/huggingface/datasets/issues/5724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5724/comments | https://api.github.com/repos/huggingface/datasets/issues/5724/events | https://github.com/huggingface/datasets/issues/5724 | 1,659,938,135 | I_kwDODunzps5i8KVX | 5,724 | Error after shuffling streaming IterableDatasets with downloaded dataset | [] | closed | false | null | 1 | 2023-04-09T16:58:44Z | 2023-04-20T20:37:30Z | 2023-04-20T20:37:30Z | null | ### Describe the bug
I downloaded the C4 dataset and used streaming IterableDatasets to read it. Everything worked normally until I used `dataset = dataset.shuffle(seed=42, buffer_size=10_000)` to shuffle the dataset. The shuffled dataset throws the following error when used with `next(iter(dataset))`:
```
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 937, in __iter__
for key, example in ex_iterable:
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 627, in __iter__
for x in self.ex_iterable:
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 138, in __iter__
yield from self.generate_examples_fn(**kwargs_with_shuffled_shards)
File "/data/miniconda3/lib/python3.9/site-packages/datasets/iterable_dataset.py", line 763, in wrapper
for key, table in generate_tables_fn(**kwargs):
File "/data/miniconda3/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 101, in _generate_tables
batch = f.read(self.config.chunksize)
File "/data/miniconda3/lib/python3.9/site-packages/datasets/download/streaming_download_manager.py", line 372, in read_with_retries
out = read(*args, **kwargs)
File "/data/miniconda3/lib/python3.9/gzip.py", line 300, in read
return self._buffer.read(size)
File "/data/miniconda3/lib/python3.9/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "/data/miniconda3/lib/python3.9/gzip.py", line 487, in read
if not self._read_gzip_header():
File "/data/miniconda3/lib/python3.9/gzip.py", line 435, in _read_gzip_header
raise BadGzipFile('Not a gzipped file (%r)' % magic)
gzip.BadGzipFile: Not a gzipped file (b've')
```
I found that there is no problem using the dataset this way without shuffling. Also, using `dataset = datasets.load_dataset('c4', 'en', split='train', streaming=True)`, which downloads the dataset on the fly instead of loading from local files, causes no problems even after shuffling.
### Steps to reproduce the bug
1. Download C4 dataset from https://huggingface.co/datasets/allenai/c4
2.
```
import datasets
dataset = datasets.load_dataset('/path/to/your/data/dir', 'en', streaming=True, split='train')
dataset = dataset.shuffle(buffer_size=10_000, seed=42)
next(iter(dataset))
```
### Expected behavior
`next(iter(dataset))` should give me a sample from the dataset
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.4.32-1-tlinux4-0001-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.13.1
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5724/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5724/timeline | null | completed | null | null | false | [
"Moving `\"en\"` to the end of the path instead of passing it as a config name should fix the error:\r\n```python\r\nimport datasets\r\ndataset = datasets.load_dataset('/path/to/your/data/dir/en', streaming=True, split='train')\r\ndataset = dataset.shuffle(buffer_size=10_000, seed=42)\r\nnext(iter(dataset))\r\n```\... |
https://api.github.com/repos/huggingface/datasets/issues/5809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5809/comments | https://api.github.com/repos/huggingface/datasets/issues/5809/events | https://github.com/huggingface/datasets/issues/5809 | 1,689,797,293 | I_kwDODunzps5kuEKt | 5,809 | wiki_dpr details for Open Domain Question Answering tasks | [] | closed | false | null | 1 | 2023-04-30T06:12:04Z | 2023-07-21T14:11:00Z | 2023-07-21T14:11:00Z | null | Hey guys!
Thanks for creating the wiki_dpr dataset!
I am currently trying to combine wiki_dpr and my own datasets, but I don't know how to compute the embedding values the same way as wiki_dpr.
As an experiment, I embedded the text of id="7" from wiki_dpr, but the result was very different from the wiki_dpr embedding. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5809/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5809/timeline | null | completed | null | null | false | [
"Hi ! I don't remember exactly how it was done, but maybe you have to embed `f\"{title}<sep>{text}\"` ?\r\n\r\nUsing a HF tokenizer it corresponds to doing\r\n```python\r\ntokenized = tokenizer(titles, texts)\r\n```"
] |
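Expanding the comment above into a runnable sketch with the DPR context encoder (the checkpoint choice is an assumption; wiki_dpr may have used a different variant, and the exact recipe may differ):

```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizer

name = "facebook/dpr-ctx_encoder-single-nq-base"  # assumed checkpoint
tokenizer = DPRContextEncoderTokenizer.from_pretrained(name)
model = DPRContextEncoder.from_pretrained(name)

titles = ["Aaron"]
texts = ["Aaron is a prophet, high priest, and the brother of Moses."]
# Passing titles and texts as a pair lets the tokenizer insert the separator.
inputs = tokenizer(titles, texts, return_tensors="pt", padding=True, truncation=True)
embeddings = model(**inputs).pooler_output  # shape: (batch, 768)
```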
https://api.github.com/repos/huggingface/datasets/issues/2439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2439/comments | https://api.github.com/repos/huggingface/datasets/issues/2439/events | https://github.com/huggingface/datasets/pull/2439 | 908,511,983 | MDExOlB1bGxSZXF1ZXN0NjU5MTkzMDE3 | 2,439 | Better error message when trying to access elements of a DatasetDict without specifying the split | [] | closed | false | null | 0 | 2021-06-01T17:04:32Z | 2021-06-15T16:03:23Z | 2021-06-07T08:54:35Z | null | As mentioned in #2437 it'd be nice to to have an indication to the users when they try to access an element of a DatasetDict without specifying the split name.
cc @thomwolf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2439/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2439/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2439.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2439",
"merged_at": "2021-06-07T08:54:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2439.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2439"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2238/comments | https://api.github.com/repos/huggingface/datasets/issues/2238/events | https://github.com/huggingface/datasets/pull/2238 | 861,518,291 | MDExOlB1bGxSZXF1ZXN0NjE4MTY5NzM5 | 2,238 | NLU evaluation data | [] | closed | false | null | 0 | 2021-04-19T16:47:20Z | 2021-04-23T15:32:05Z | 2021-04-23T15:32:05Z | null | New intent classification dataset from https://github.com/xliuhw/NLU-Evaluation-Data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2238/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2238/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2238.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2238",
"merged_at": "2021-04-23T15:32:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2238.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2238"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4268/comments | https://api.github.com/repos/huggingface/datasets/issues/4268/events | https://github.com/huggingface/datasets/issues/4268 | 1,223,331,964 | I_kwDODunzps5I6pB8 | 4,268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 10 | 2022-05-02T20:34:25Z | 2022-05-06T15:53:30Z | 2022-05-03T11:23:48Z | null | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results.
## Actual results
```
ExpectedMoreDownloadedFiles Traceback (most recent call last)
[<ipython-input-62-4ac5cf959477>](https://localhost:8080/#) in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
3 frames
[/usr/local/lib/python3.7/dist-packages/datasets/utils/info_utils.py](https://localhost:8080/#) in verify_checksums(expected_checksums, recorded_checksums, verification_name)
31 return
32 if len(set(expected_checksums) - set(recorded_checksums)) > 0:
---> 33 raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
34 if len(set(recorded_checksums) - set(expected_checksums)) > 0:
35 raise UnexpectedDownloadedFile(str(set(recorded_checksums) - set(expected_checksums)))
ExpectedMoreDownloadedFiles: {'/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz', '/home/leandro/catalogue_data/datasets/lm_en_wiktionary_filtered/data/file-01.jsonl.gz.lock'}
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
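If the goal is only to get past the checksum verification (the recorded checksums point at the maintainer's local paths), one possible workaround on this `datasets` version is to skip verification entirely; whether the data files themselves are reachable is a separate question:

```python
from datasets import load_dataset

dataset = load_dataset(
    "bigscience-catalogue-lm-data/lm_en_wiktionary_filtered",
    ignore_verifications=True,  # skips checksum/size checks in older datasets versions
)
```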
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4268/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4268/timeline | null | completed | null | null | false | [
"It would help a lot to be able to preview the dataset - I'd like to see if the pronunciations are in the dataset, eg. for [\"word\"](https://en.wiktionary.org/wiki/word),\r\n\r\nPronunciation\r\n([Received Pronunciation](https://en.wikipedia.org/wiki/Received_Pronunciation)) [IPA](https://en.wiktionary.org/wiki/Wi... |
https://api.github.com/repos/huggingface/datasets/issues/214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/214/comments | https://api.github.com/repos/huggingface/datasets/issues/214/events | https://github.com/huggingface/datasets/pull/214 | 626,641,549 | MDExOlB1bGxSZXF1ZXN0NDI0NTk1NjIx | 214 | [arrow_dataset.py] add new filter function | [] | closed | false | null | 13 | 2020-05-28T16:21:40Z | 2020-05-29T11:43:29Z | 2020-05-29T11:32:20Z | null | The `.map()` function is super useful, but filtering certain examples with it can IMO be a bit tedious.
I think filtering out examples is also a very common operation people would like to perform on datasets.
This PR is a proposal to add a `.filter()` function in the same spirit as the `.map()` function.
Here is some sample code you can play around with:
```python
import nlp

ds = nlp.load_dataset("squad", split="validation[:10%]")

def remove_under_idx_5(example, idx):
    # keeps only the examples whose index is < 5
    return idx < 5

def only_keep_examples_with_is_in_context(example):
    return "is" in example["context"]

result_keep_only_first_5 = ds.filter(remove_under_idx_5, with_indices=True, load_from_cache_file=False)
result_keep_examples_with_is_in_context = ds.filter(only_keep_examples_with_is_in_context, load_from_cache_file=False)

print("Original number of examples: {}".format(len(ds)))
print("First five examples number of examples: {}".format(len(result_keep_only_first_5)))
print("Is in context examples number of examples: {}".format(len(result_keep_examples_with_is_in_context)))
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/214/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/214/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/214.diff",
"html_url": "https://github.com/huggingface/datasets/pull/214",
"merged_at": "2020-05-29T11:32:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/214.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/214"
} | true | [
"I agree that a `.filter` method would be VERY useful and appreciated. I'm not a big fan of using `flatten_nested` as it completely breaks down the structure of the example and it may create bugs. Right now I think it may not work for nested structures. Maybe there's a simpler way that we've not figured out yet.",
... |
https://api.github.com/repos/huggingface/datasets/issues/4223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4223/comments | https://api.github.com/repos/huggingface/datasets/issues/4223/events | https://github.com/huggingface/datasets/pull/4223 | 1,216,107,082 | PR_kwDODunzps42z0YV | 4,223 | Add Accuracy Metric Card | [] | closed | false | null | 1 | 2022-04-26T15:10:46Z | 2022-05-03T14:27:45Z | 2022-05-03T14:20:47Z | null | - adds accuracy metric card
- updates docstring in accuracy.py
- adds .json file with metric card and docstring information | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4223/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4223/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4223.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4223",
"merged_at": "2022-05-03T14:20:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4223.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4223"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1962 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1962/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1962/comments | https://api.github.com/repos/huggingface/datasets/issues/1962/events | https://github.com/huggingface/datasets/pull/1962 | 818,089,156 | MDExOlB1bGxSZXF1ZXN0NTgxNDQwNzM4 | 1,962 | Fix unused arguments | [] | closed | false | null | 3 | 2021-02-28T02:47:07Z | 2021-03-11T02:18:17Z | 2021-03-03T16:37:50Z | null | Noticed some args in the codebase are not used, so managed to find all such occurrences with Pylance and fix them. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1962/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1962/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1962.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1962",
"merged_at": "2021-03-03T16:37:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1962.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1962"
} | true | [
"@lhoestq Re-added the arg. The ConnectionError in CI seems unrelated to this PR (the same test fails on master as well).",
"Thanks !\r\nI'm re-running the CI, maybe this was an issue with circleCI",
"Looks all good now, merged :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/6036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6036/comments | https://api.github.com/repos/huggingface/datasets/issues/6036/events | https://github.com/huggingface/datasets/pull/6036 | 1,805,138,898 | PR_kwDODunzps5ViKc4 | 6,036 | Deprecate search API | [] | open | false | null | 8 | 2023-07-14T16:22:09Z | 2023-07-21T19:53:51Z | null | null | The Search API only supports Faiss and ElasticSearch as vector stores, is somewhat difficult to maintain (e.g., it still doesn't support ElasticSearch 8.0, testing is difficult, ...), does not have the best design (adds a bunch of methods to the `Dataset` class that are only useful after creating an index), its usage doesn't seem to be significant, and it is not integrated with the Hub. Since we have no plans/bandwidth to improve it and better alternatives such as `langchain` and `docarray` exist, I think it should be deprecated (and eventually removed).
If we decide to deprecate/remove it, the following usage instances need to be addressed:
* [Course](https://github.com/huggingface/course/blob/0018bb434204d9750a03592cb0d4e846093218d8/chapters/en/chapter5/6.mdx#L342 ) and [Blog](https://github.com/huggingface/blog/blob/4897c6f73d4492a0955ade503281711d01840e09/image-search-datasets.md?plain=1#L252) - calling the FAISS API directly should be OK in these instances as it's pretty simple to use for basic scenarios. Alternatively, we can use `langchain`, but this adds an extra dependency
* [Transformers](https://github.com/huggingface/transformers/blob/50726f9ea7afc6113da617f8f4ca1ab264a5e28a/src/transformers/models/rag/retrieval_rag.py#L183) - we can use the FAISS API directly and store the index as a separate attribute (and instead of building the `wiki_dpr` index each time the dataset is generated, we can generate it once and push it to the Hub repo, and then read it from there)
cc @huggingface/datasets @LysandreJik for the opinion | {
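For illustration, calling FAISS directly (instead of `Dataset.add_faiss_index`) is roughly:

```python
import faiss
import numpy as np

embeddings = np.random.rand(1_000, 768).astype("float32")  # toy vectors
index = faiss.IndexFlatIP(768)  # inner-product index
index.add(embeddings)
scores, ids = index.search(embeddings[:1], 5)  # top-5 neighbors of the first vector
```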
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6036/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6036/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6036.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6036",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6036.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6036"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/5360 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5360/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5360/comments | https://api.github.com/repos/huggingface/datasets/issues/5360/events | https://github.com/huggingface/datasets/issues/5360 | 1,496,947,177 | I_kwDODunzps5ZOZnp | 5,360 | IterableDataset returns duplicated data using PyTorch DDP | [] | closed | false | null | 11 | 2022-12-14T16:06:19Z | 2023-06-15T09:51:13Z | 2023-01-16T13:33:33Z | null | As mentioned in https://github.com/huggingface/datasets/issues/3423, when using PyTorch DDP the dataset ends up with duplicated data. We already check for the PyTorch `worker_info` for single node, but we should also check for `torch.distributed.get_world_size()` and `torch.distributed.get_rank()` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5360/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5360/timeline | null | completed | null | null | false | [
"If you use huggingface trainer, you will find the trainer has wrapped a `IterableDatasetShard` to avoid duplication.\r\nSee:\r\nhttps://github.com/huggingface/transformers/blob/dfd818420dcbad68e05a502495cf666d338b2bfb/src/transformers/trainer.py#L835\r\n",
"If you want to support it by datasets natively, maybe w... |
https://api.github.com/repos/huggingface/datasets/issues/632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/632/comments | https://api.github.com/repos/huggingface/datasets/issues/632/events | https://github.com/huggingface/datasets/pull/632 | 702,358,124 | MDExOlB1bGxSZXF1ZXN0NDg3NjQ5OTQ2 | 632 | Fix typos in the loading datasets docs | [] | closed | false | null | 1 | 2020-09-16T00:27:41Z | 2020-09-21T16:31:11Z | 2020-09-16T06:52:44Z | null | This PR fixes two typos in the loading datasets docs, one of them being a broken link to the `load_dataset` function. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/632/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/632/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/632.diff",
"html_url": "https://github.com/huggingface/datasets/pull/632",
"merged_at": "2020-09-16T06:52:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/632.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/632"
} | true | [
"thanks!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2575/comments | https://api.github.com/repos/huggingface/datasets/issues/2575/events | https://github.com/huggingface/datasets/pull/2575 | 934,876,496 | MDExOlB1bGxSZXF1ZXN0NjgxODg0OTgy | 2,575 | Add C4 | [] | closed | false | null | 0 | 2021-07-01T13:58:08Z | 2021-07-02T14:50:23Z | 2021-07-02T14:50:23Z | null | The old code for the C4 dataset generated it with Apache Beam, as in TensorFlow Datasets.
However AllenAI is now hosting the processed C4 dataset in this repo: https://huggingface.co/datasets/allenai/c4
Thanks a lot to them for their amazing work!
In this PR I changed the script to download and prepare the data directly from this repo.
It has 4 variants: en, en.noblocklist, en.noclean, realnewslike
You can load it with
```python
from datasets import load_dataset
c4 = load_dataset("c4", "en")
```
It also supports streaming, if you don't want to download hundreds of GB of data:
```python
c4 = load_dataset("c4", "en", streaming=True)
```
Regarding the dataset_infos.json, I haven't added the infos for en.noclean. I will add them once I have them.
Also we can work on the dataset card at https://huggingface.co/datasets/c4
For now I just added a link to https://huggingface.co/datasets/allenai/c4 as well as a few sections | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2575/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2575/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2575.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2575",
"merged_at": "2021-07-02T14:50:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2575.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2575"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/990 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/990/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/990/comments | https://api.github.com/repos/huggingface/datasets/issues/990/events | https://github.com/huggingface/datasets/pull/990 | 755,097,798 | MDExOlB1bGxSZXF1ZXN0NTMwODc1NDYx | 990 | Add E2E NLG | [] | closed | false | null | 0 | 2020-12-02T09:25:12Z | 2020-12-03T13:08:05Z | 2020-12-03T13:08:04Z | null | Adding the E2E NLG dataset.
More info here : http://www.macs.hw.ac.uk/InteractionLab/E2E/
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()` (a skeleton is sketched after this checklist)
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files so that the dataset script can be tested, and make sure they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.
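For context, the skeleton such a script fills in — a minimal, hypothetical `GeneratorBasedBuilder` with a placeholder URL and features, not the actual E2E NLG script:
```python
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    """Minimal dataset-script sketch with placeholder content."""

    def _info(self):
        return datasets.DatasetInfo(
            description="Placeholder description.",
            features=datasets.Features({"text": datasets.Value("string")}),
        )

    def _split_generators(self, dl_manager):
        # Hypothetical URL; a real script downloads the actual data files.
        path = dl_manager.download_and_extract("https://example.com/data.txt")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"filepath": path}
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```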
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/990/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/990/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/990.diff",
"html_url": "https://github.com/huggingface/datasets/pull/990",
"merged_at": "2020-12-03T13:08:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/990.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/990"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1032/comments | https://api.github.com/repos/huggingface/datasets/issues/1032/events | https://github.com/huggingface/datasets/pull/1032 | 755,858,785 | MDExOlB1bGxSZXF1ZXN0NTMxNDk2MTU2 | 1,032 | IIT B English to Hindi machine translation dataset | [] | closed | false | null | 5 | 2020-12-03T05:18:45Z | 2021-01-10T08:44:51Z | 2021-01-10T08:44:15Z | null | Adding IIT Bombay English-Hindi Corpus dataset
more info : http://www.cfilt.iitb.ac.in/iitb_parallel/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1032/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1032/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1032.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1032",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1032.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1032"
} | true | [
"Please note that this dataset is actually behind a form that one needs to fill. However, the link is direct. I'm not sure what should the approach be in this case.",
"also pinging @thomwolf \r\nThe dataset webpage returns a form when trying to download the dataset (form here : http://www.cfilt.iitb.ac.in/iitb_pa... |
https://api.github.com/repos/huggingface/datasets/issues/1638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1638/comments | https://api.github.com/repos/huggingface/datasets/issues/1638/events | https://github.com/huggingface/datasets/pull/1638 | 774,869,184 | MDExOlB1bGxSZXF1ZXN0NTQ1Njg5ODQ5 | 1,638 | Add id_puisi dataset | [] | closed | false | null | 0 | 2020-12-26T12:41:55Z | 2020-12-30T16:34:17Z | 2020-12-30T16:34:17Z | null | Puisi (poem) is an Indonesian poetic form. The dataset contains 7223 Indonesian puisi, each with its title and author. :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1638/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1638/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1638.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1638",
"merged_at": "2020-12-30T16:34:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1638.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1638"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/77 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/77/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/77/comments | https://api.github.com/repos/huggingface/datasets/issues/77/events | https://github.com/huggingface/datasets/pull/77 | 616,674,601 | MDExOlB1bGxSZXF1ZXN0NDE2NzQwMjAz | 77 | New datasets | [] | closed | false | null | 0 | 2020-05-12T13:51:59Z | 2020-05-12T14:02:16Z | 2020-05-12T14:02:15Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/77/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/77/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/77.diff",
"html_url": "https://github.com/huggingface/datasets/pull/77",
"merged_at": "2020-05-12T14:02:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/77.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/77"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/5410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5410/comments | https://api.github.com/repos/huggingface/datasets/issues/5410/events | https://github.com/huggingface/datasets/pull/5410 | 1,521,168,032 | PR_kwDODunzps5GvnJH | 5,410 | Map-style Dataset to IterableDataset | [] | closed | false | null | 22 | 2023-01-05T18:12:17Z | 2023-02-01T18:11:45Z | 2023-02-01T16:36:01Z | null | Added `ds.to_iterable()` to get an iterable dataset from a map-style arrow dataset.
It also has a `num_shards` argument to split the dataset before converting to an iterable dataset. Sharding is important to enable efficient shuffling and parallel loading of iterable datasets.
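A sketch of the intended usage, based on the description above (the method name is as written in this PR and may differ in released versions):
```python
from datasets import Dataset

ds = Dataset.from_dict({"n": list(range(1024))})

# Convert the map-style dataset into a sharded iterable dataset.
iterable_ds = ds.to_iterable(num_shards=8)

# Shards make approximate shuffling cheap: shuffle the shard order
# plus a small per-shard buffer, instead of the full dataset.
shuffled = iterable_ds.shuffle(seed=42, buffer_size=128)

for example in shuffled.take(3):
    print(example)
```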
TODO:
- [x] tests
- [x] docs
Fix https://github.com/huggingface/datasets/issues/5265 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5410/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5410/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5410.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5410",
"merged_at": "2023-02-01T16:36:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5410.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5410"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/5597 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5597/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5597/comments | https://api.github.com/repos/huggingface/datasets/issues/5597/events | https://github.com/huggingface/datasets/issues/5597 | 1,604,928,721 | I_kwDODunzps5fqUTR | 5,597 | in-place dataset update | [
{
"color": "ffffff",
"default": true,
"description": "This will not be worked on",
"id": 1935892913,
"name": "wontfix",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix"
}
] | closed | false | null | 3 | 2023-03-01T12:58:18Z | 2023-03-02T13:30:41Z | 2023-03-02T03:47:00Z | null | ### Motivation
When I create an empty `Dataset` and keep appending new rows to it, each call creates a new dataset, which looks quite memory-consuming. I just wonder if there is a more efficient way to do this.
```python
from datasets import Dataset
ds = Dataset.from_list([])
ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: [],
>>> num_rows: 0
>>> })
ds = ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: ['a', 'b'],
>>> num_rows: 1
>>> })
```
### Feature request
Add in-place dataset update functions that update the existing `Dataset` without creating a new copy. The interface should follow the PyTorch convention, where the in-place version of `function` is named `function_`. For example, the in-place version of `add_item`, i.e., `add_item_`, immediately updates the `Dataset`.
```python
from datasets import Dataset
ds = Dataset.from_list([])
ds.add_item({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: [],
>>> num_rows: 0
>>> })
ds.add_item_({'a': [1, 2, 3], 'b': 4})
print(ds)
>>> Dataset({
>>> features: ['a', 'b'],
>>> num_rows: 1
>>> })
```
### Related Functions
* `.map`
* `.filter`
* `.add_item` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5597/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5597/timeline | null | completed | null | null | false | [
"We won't support in-place modifications since `datasets` is based on the Apache Arrow format which doesn't support in-place modifications.\r\n\r\nIn your case the old dataset is garbage collected pretty quickly so you won't have memory issues.\r\n\r\nNote that datasets loaded from disk (memory mapped) are not load... |
https://api.github.com/repos/huggingface/datasets/issues/2694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2694/comments | https://api.github.com/repos/huggingface/datasets/issues/2694/events | https://github.com/huggingface/datasets/pull/2694 | 949,844,722 | MDExOlB1bGxSZXF1ZXN0Njk0NDg0NTcy | 2,694 | fix: 🐛 change string format to allow copy/paste to work in bash | [] | closed | false | null | 0 | 2021-07-21T15:30:40Z | 2021-07-22T10:41:47Z | 2021-07-22T10:41:47Z | null | Before: copy/paste resulted in an error because the square bracket
characters `[]` are special characters in bash | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2694/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2694/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2694.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2694",
"merged_at": "2021-07-22T10:41:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2694.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2694"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1700/comments | https://api.github.com/repos/huggingface/datasets/issues/1700/events | https://github.com/huggingface/datasets/pull/1700 | 781,333,589 | MDExOlB1bGxSZXF1ZXN0NTUxMDc1NTg2 | 1,700 | Update Curiosity dialogs DatasetCard | [] | closed | false | null | 0 | 2021-01-07T13:59:27Z | 2021-01-12T18:51:32Z | 2021-01-12T18:51:32Z | null | Update Curiosity dialogs DatasetCard
There are some entries in the data fields section yet to be filled. There is little information regarding those fields. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1700/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1700/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1700.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1700",
"merged_at": "2021-01-12T18:51:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1700.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1700"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4459/comments | https://api.github.com/repos/huggingface/datasets/issues/4459/events | https://github.com/huggingface/datasets/pull/4459 | 1,264,636,481 | PR_kwDODunzps45UFc8 | 4,459 | Add and fix language tags for udhr dataset | [] | closed | false | null | 1 | 2022-06-08T12:03:42Z | 2022-06-08T12:36:24Z | 2022-06-08T12:27:13Z | null | Related to #4362. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4459/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4459/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4459.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4459",
"merged_at": "2022-06-08T12:27:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4459.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4459"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2973 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2973/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2973/comments | https://api.github.com/repos/huggingface/datasets/issues/2973/events | https://github.com/huggingface/datasets/pull/2973 | 1,007,894,592 | PR_kwDODunzps4sTRvk | 2,973 | Fix JSON metadata of masakhaner dataset | [] | closed | false | null | 0 | 2021-09-27T09:09:08Z | 2021-09-27T12:59:59Z | 2021-09-27T12:59:59Z | null | Fix #2971. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2973/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2973/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2973.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2973",
"merged_at": "2021-09-27T12:59:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2973.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2973"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/445 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/445/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/445/comments | https://api.github.com/repos/huggingface/datasets/issues/445/events | https://github.com/huggingface/datasets/issues/445 | 666,836,658 | MDU6SXNzdWU2NjY4MzY2NTg= | 445 | DEFAULT_TOKENIZER import error in sacrebleu | [] | closed | false | null | 1 | 2020-07-28T07:31:30Z | 2020-07-28T12:58:56Z | 2020-07-28T12:58:56Z | null | Latest Version 0.3.0
When loading the metric "sacrebleu" there is an import error due to the wrong path

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/445/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/445/timeline | null | completed | null | null | false | [
"This issue was resolved by #447 "
] |
https://api.github.com/repos/huggingface/datasets/issues/2879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2879/comments | https://api.github.com/repos/huggingface/datasets/issues/2879/events | https://github.com/huggingface/datasets/issues/2879 | 990,257,404 | MDU6SXNzdWU5OTAyNTc0MDQ= | 2,879 | In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?" | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-09-07T18:53:45Z | 2021-09-08T16:55:19Z | 2021-09-08T09:12:28Z | null | ## Describe the bug
Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same.
## Steps to reproduce the bug
I was following this tutorial
- https://huggingface.co/blog/fine-tune-wav2vec2-english
But here's a distilled repro:
```python
!pip install datasets==1.4.1
from datasets import load_dataset
timit = load_dataset("timit_asr", cache_dir="./temp")
unique_transcripts = set(timit["train"]["text"])
print(unique_transcripts)
assert len(unique_transcripts) > 1
```
## Expected results
Expected the correct TIMIT data. Or an error saying that this version of `datasets` can't produce it.
## Actual results
Every train transcript was "Would such an act of refusal be useful?" Every test transcript was "The bungalow was pleasantly situated near the shore."
## Environment info
- `datasets` version: 1.4.1
- Platform: Darwin-18.7.0-x86_64-i386-64bit
- Python version: 3.7.9
- PyTorch version (GPU?): 1.9.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: tried both
- Using distributed or parallel set-up in script?: no
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2879/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2879/timeline | null | completed | null | null | false | [
"Hi @rcgale, thanks for reporting.\r\n\r\nPlease note that this bug was fixed on `datasets` version 1.5.0: https://github.com/huggingface/datasets/commit/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878\r\n\r\nIf you update `datasets` version, that shoul... |
https://api.github.com/repos/huggingface/datasets/issues/1013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1013/comments | https://api.github.com/repos/huggingface/datasets/issues/1013/events | https://github.com/huggingface/datasets/pull/1013 | 755,493,075 | MDExOlB1bGxSZXF1ZXN0NTMxMTkzMTcy | 1,013 | Adding CS restaurants dataset | [] | closed | false | null | 0 | 2020-12-02T18:02:30Z | 2020-12-02T18:25:20Z | 2020-12-02T18:25:19Z | null | This PR adds the CS restaurants dataset; this is a re-opening of a previous PR with a chaotic commit history. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1013/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1013/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1013.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1013",
"merged_at": "2020-12-02T18:25:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1013.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1013"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5113/comments | https://api.github.com/repos/huggingface/datasets/issues/5113/events | https://github.com/huggingface/datasets/pull/5113 | 1,409,207,607 | PR_kwDODunzps5Az0Ei | 5,113 | Fix filter indices when batched | [] | closed | false | null | 3 | 2022-10-14T11:30:03Z | 2022-10-24T06:21:09Z | 2022-10-14T12:11:44Z | null | This PR fixes a bug introduced by:
- #5030
Fix #5112. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5113/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5113/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5113.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5113",
"merged_at": "2022-10-14T12:11:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5113.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5113"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think a patch release will be necessary.",
"I'm also fixing https://github.com/huggingface/datasets/issues/5111 which will lalso require a patch release"
] |
https://api.github.com/repos/huggingface/datasets/issues/2212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2212/comments | https://api.github.com/repos/huggingface/datasets/issues/2212/events | https://github.com/huggingface/datasets/issues/2212 | 855,999,133 | MDU6SXNzdWU4NTU5OTkxMzM= | 2,212 | Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | [] | open | false | null | 4 | 2021-04-12T13:49:56Z | 2021-05-17T22:17:06Z | null | null | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, post-processed: Unknown size, total: 9.76 MiB) to /root/.cache/huggingface/datasets/fquad/default/0.1.0/778dc2c85813d05ddd0c17087294d5f8f24820752340958070876b677af9f061...
---------------------------------------------------------------------------
ConnectionError Traceback (most recent call last)
<ipython-input-48-a2721797e23b> in <module>()
----> 1 fquad = load_dataset("fquad")
11 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
614 raise FileNotFoundError("Couldn't find file at {}".format(url))
615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
--> 616 raise ConnectionError("Couldn't reach {}".format(url))
617
618 # Try a second time
ConnectionError: Couldn't reach https://storage.googleapis.com/illuin/fquad/train.json.zip
```
Does anyone know why that is and how to fix it? | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2212/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2212/timeline | null | null | null | null | false | [
"Hi ! Apparently the data are not available from this url anymore. We'll replace it with the new url when it's available",
"I saw this on their website when we request to download the dataset:\r\n\r\n\r\... |
https://api.github.com/repos/huggingface/datasets/issues/1437 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1437/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1437/comments | https://api.github.com/repos/huggingface/datasets/issues/1437/events | https://github.com/huggingface/datasets/pull/1437 | 760,891,879 | MDExOlB1bGxSZXF1ZXN0NTM1NjQwODE0 | 1,437 | Add Indosum dataset | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 2 | 2020-12-10T05:02:00Z | 2022-10-03T09:38:54Z | 2022-10-03T09:38:54Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1437/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1437/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1437.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1437",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1437.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1437"
} | true | [
"Hi @prasastoadi have you had a chance to take a look at my suggestions ?\r\n\r\nFeel free to ping ;e if you have questions or when you're ready for a review",
"Thanks for your contribution, @prasastoadi. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub ... |
https://api.github.com/repos/huggingface/datasets/issues/3909 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3909/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3909/comments | https://api.github.com/repos/huggingface/datasets/issues/3909/events | https://github.com/huggingface/datasets/issues/3909 | 1,168,578,058 | I_kwDODunzps5FpxYK | 3,909 | Error loading file audio when downloading the Common Voice dataset directly from the Hub | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 8 | 2022-03-14T15:53:50Z | 2023-03-02T15:31:27Z | 2023-03-02T15:31:26Z | null | ## Describe the bug
When loading the Common Voice dataset by downloading it directly from the Hugging Face Hub, some files cannot be opened.
## Steps to reproduce the bug
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
test_dataset = load_dataset("common_voice", "it", split="test")
#test_dataset = load_dataset('csv', data_files = {'test': '/workspace/Dataset/Common_Voice/cv-corpus80/it/test.csv'})
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model = Wav2Vec2ForCTC.from_pretrained("joorock12/wav2vec2-large-xlsr-italian")
model.to("cuda")
chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\'\�]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
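As a side note, on `datasets` >= 1.18 the Common Voice script exposes a decoded `audio` column, so the audio can be read and resampled through the `Audio` feature instead of calling `torchaudio.load` on the raw path — a sketch of that alternative:
```python
from datasets import load_dataset, Audio

test_dataset = load_dataset("common_voice", "it", split="test")
# Decode and resample to 16 kHz through the Audio feature.
test_dataset = test_dataset.cast_column("audio", Audio(sampling_rate=16_000))

def speech_file_to_array_fn(batch):
    batch["speech"] = batch["audio"]["array"]  # decoded by `datasets` itself
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```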
## Expected results
The Common Voice dataset is downloaded and correctly loaded with the Hugging Face `datasets` library.
## Actual results
The error is:
```python
0ex [00:00, ?ex/s]
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-ef87f4129e6e> in <module>
7 return batch
8
----> 9 test_dataset = test_dataset.map(speech_file_to_array_fn)
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2107
2108 if num_proc is None or num_proc == 1:
-> 2109 return self._map_single(
2110 function=function,
2111 with_indices=with_indices,
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
516 self: "Dataset" = kwargs.pop("self")
517 # apply actual function
--> 518 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
519 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
520 for dataset in datasets:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
483 }
484 # apply actual function
--> 485 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
486 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
487 # re-apply format to the output
/opt/conda/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
411 # Call actual function
412
--> 413 out = func(self, *args, **kwargs)
414
415 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2465 if not batched:
2466 for i, example in enumerate(pbar):
-> 2467 example = apply_function_on_filtered_inputs(example, i, offset=offset)
2468 if update_data:
2469 if i == 0:
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
2372 if with_rank:
2373 additional_args += (rank,)
-> 2374 processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
2375 if update_data is None:
2376 # Check if the function returns updated examples
/opt/conda/lib/python3.8/site-packages/datasets/arrow_dataset.py in decorated(item, *args, **kwargs)
2067 )
2068 # Use the LazyDict internally, while mapping the function
-> 2069 result = f(decorated_item, *args, **kwargs)
2070 # Return a standard dict
2071 return result.data if isinstance(result, LazyDict) else result
<ipython-input-48-ef87f4129e6e> in speech_file_to_array_fn(batch)
3 def speech_file_to_array_fn(batch):
4 batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
----> 5 speech_array, sampling_rate = torchaudio.load(batch["path"])
6 batch["speech"] = resampler(speech_array).squeeze().numpy()
7 return batch
/opt/conda/lib/python3.8/site-packages/torchaudio/backend/sox_io_backend.py in load(filepath, frame_offset, num_frames, normalize, channels_first, format)
150 filepath, frame_offset, num_frames, normalize, channels_first, format)
151 filepath = os.fspath(filepath)
--> 152 return torch.ops.torchaudio.sox_io_load_audio_file(
153 filepath, frame_offset, num_frames, normalize, channels_first, format)
154
RuntimeError: Error loading audio file: failed to open file common_voice_it_17415776.mp3
```
## Environment info
- `datasets` version: 1.18.4
- Platform: Linux-5.4.0-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyArrow version: 7.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3909/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3909/timeline | null | completed | null | null | false | [
"Hi ! It could an issue with torchaudio, which version of torchaudio are you using ? Can you also try updating `datasets` to 2.0.0 and see if it works ?",
"I _might_ have a similar issue. I'm trying to use the librispeech_asr dataset and read it with soundfile.\r\n\r\n```python\r\nfrom datasets import load_datase... |
https://api.github.com/repos/huggingface/datasets/issues/3138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3138/comments | https://api.github.com/repos/huggingface/datasets/issues/3138/events | https://github.com/huggingface/datasets/issues/3138 | 1,033,379,997 | I_kwDODunzps49mCCd | 3,138 | More fine-grained taxonomy of error types | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "E5583E",
"default": fals... | open | false | null | 1 | 2021-10-22T09:35:29Z | 2022-09-20T13:04:42Z | null | null | **Is your feature request related to a problem? Please describe.**
Exceptions like `FileNotFoundError` can be raised by different parts of the code, and it's hard to tell which part raised it
**Describe the solution you'd like**
Give a specific exception type for every group of similar errors
**Describe alternatives you've considered**
Rely on the error message, using regex
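A sketch of what such a taxonomy could look like — the class names below are hypothetical, not an existing `datasets` API:
```python
class DatasetsError(Exception):
    """Base class for all errors raised by the library."""

class DatasetNotFoundError(DatasetsError, FileNotFoundError):
    """The dataset script or repository could not be located."""

class DataFilesNotFoundError(DatasetsError, FileNotFoundError):
    """The dataset exists, but its data files could not be located."""

# Callers can then catch a precise type instead of matching messages:
try:
    raise DatasetNotFoundError("no dataset named 'foo'")
except DatasetNotFoundError as err:
    print(f"handling a missing dataset: {err}")
```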
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3138/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3138/timeline | null | null | null | null | false | [
"related: #4995\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1486 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1486/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1486/comments | https://api.github.com/repos/huggingface/datasets/issues/1486/events | https://github.com/huggingface/datasets/pull/1486 | 762,790,102 | MDExOlB1bGxSZXF1ZXN0NTM3MzAxODY2 | 1,486 | hate speech 18 dataset | [] | closed | false | null | 2 | 2020-12-11T19:22:14Z | 2020-12-14T19:43:18Z | 2020-12-14T19:43:18Z | null | This is again a PR instead of #1339, because something went wrong there. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1486/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1486/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1486.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1486",
"merged_at": "2020-12-14T19:43:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1486.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1486"
} | true | [
"The error `tests/test_file_utils.py::TempSeedTest::test_tensorflow` just appeared because of tensorflow's update.\r\nOnce it's fixed on master we'll be free to merge this one",
"It's fixed on master now :) \r\n\r\nmerging this once"
] |
https://api.github.com/repos/huggingface/datasets/issues/3587 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3587/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3587/comments | https://api.github.com/repos/huggingface/datasets/issues/3587/events | https://github.com/huggingface/datasets/issues/3587 | 1,106,719,182 | I_kwDODunzps5B9zHO | 3,587 | No module named 'fsspec.archive' | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-01-18T10:17:01Z | 2022-08-11T09:57:54Z | 2022-01-18T10:33:10Z | null | ## Describe the bug
Cannot import datasets after installation.
## Steps to reproduce the bug
```shell
$ python
Python 3.9.7 (default, Sep 16 2021, 13:09:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import datasets
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/__init__.py", line 34, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 61, in <module>
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/arrow_writer.py", line 28, in <module>
from .features import (
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/features/__init__.py", line 2, in <module>
from .audio import Audio
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/features/audio.py", line 7, in <module>
from ..utils.streaming_download_manager import xopen
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/utils/streaming_download_manager.py", line 18, in <module>
from ..filesystems import COMPRESSION_FILESYSTEMS
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/filesystems/__init__.py", line 6, in <module>
from . import compression
File "/home/shuchen/miniconda3/envs/hf/lib/python3.9/site-packages/datasets/filesystems/compression.py", line 5, in <module>
from fsspec.archive import AbstractArchiveFileSystem
ModuleNotFoundError: No module named 'fsspec.archive'
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3587/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3587/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/347/comments | https://api.github.com/repos/huggingface/datasets/issues/347/events | https://github.com/huggingface/datasets/issues/347 | 652,106,567 | MDU6SXNzdWU2NTIxMDY1Njc= | 347 | 'cp950' codec error from load_dataset('xtreme', 'tydiqa') | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 10 | 2020-07-07T08:14:23Z | 2020-09-07T14:51:45Z | 2020-09-07T14:51:45Z | null | 
I guess the error is related to a Python source encoding issue: my PC is trying to decode the source code with the wrong encoding-decoding tools, perhaps:
https://www.python.org/dev/peps/pep-0263/
I guess the error was triggered by the code `module = importlib.import_module(module_path)` at line 57 of the source file nlp/src/nlp/load.py (https://github.com/huggingface/nlp/blob/911d5596f9b500e39af8642fe3d1b891758999c7/src/nlp/load.py#L51)
Any ideas?
P.S. I tried the same code on Colab, and it runs perfectly.
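The fix discussed in the comments below is to pass an explicit encoding wherever the loading script opens text files, so Windows doesn't fall back to the locale codec (cp950 here) — a sketch:
```python
# Hypothetical snippet from a loading script: always name the encoding.
with open(filepath, encoding="utf-8") as f:
    for line in f:
        ...  # the script's actual parsing logic goes here
```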
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/347/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/347/timeline | null | completed | null | null | false | [
"This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file.\r\nTry to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter.\r\nSee issues #242 and #307 ",
"It should be in `xtreme.py:L755`:\r\n```python\r\n ... |
https://api.github.com/repos/huggingface/datasets/issues/27 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/27/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/27/comments | https://api.github.com/repos/huggingface/datasets/issues/27/events | https://github.com/huggingface/datasets/pull/27 | 610,230,476 | MDExOlB1bGxSZXF1ZXN0NDExNzA5OTc0 | 27 | [Cleanup] Removes all files in testing except test_dataset_common | [] | closed | false | null | 0 | 2020-04-30T16:45:21Z | 2020-04-30T17:39:25Z | 2020-04-30T17:39:23Z | null | As far as I know, all files in `tests` were old `tfds test files` so I removed them. We can still look them up in the other library. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/27/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/27/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/27.diff",
"html_url": "https://github.com/huggingface/datasets/pull/27",
"merged_at": "2020-04-30T17:39:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/27.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/27"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1616 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1616/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1616/comments | https://api.github.com/repos/huggingface/datasets/issues/1616/events | https://github.com/huggingface/datasets/pull/1616 | 772,074,229 | MDExOlB1bGxSZXF1ZXN0NTQzNDEwNDc1 | 1,616 | added TurkishMovieSentiment dataset | [] | closed | false | null | 1 | 2020-12-21T11:03:16Z | 2020-12-24T07:08:41Z | 2020-12-23T16:50:06Z | null | This PR adds **TurkishMovieSentiment**, a dataset of Turkish movie reviews.
- **Homepage:** [https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks](https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks)
- **Point of Contact:** [Mustafa Keskin](https://www.linkedin.com/in/mustfkeskin/) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1616/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1616/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1616.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1616",
"merged_at": "2020-12-23T16:50:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1616.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1616"
} | true | [
"> I just generated the dataset_infos.json file\r\n> \r\n> Thanks for adding this one !\r\n\r\nThank you very much for your support."
] |
https://api.github.com/repos/huggingface/datasets/issues/5364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5364/comments | https://api.github.com/repos/huggingface/datasets/issues/5364/events | https://github.com/huggingface/datasets/pull/5364 | 1,498,360,628 | PR_kwDODunzps5Fiss1 | 5,364 | Support for writing arrow files directly with BeamWriter | [] | open | false | null | 4 | 2022-12-15T12:38:05Z | 2023-01-25T15:49:25Z | null | null | Make it possible to write Arrow files directly with `BeamWriter` rather than converting from Parquet to Arrow, which is sub-optimal, especially for big datasets for which Beam is primarily used. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5364/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5364/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/5364.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5364",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5364.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5364"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5364). All of your documentation changes will be reflected on that endpoint.",
"Deleting `BeamPipeline` and `upload_local_to_remote` would break the existing Beam scripts, so I reverted this change.\r\n\r\nFrom what I understan... |
https://api.github.com/repos/huggingface/datasets/issues/1627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1627/comments | https://api.github.com/repos/huggingface/datasets/issues/1627/events | https://github.com/huggingface/datasets/issues/1627 | 773,960,255 | MDU6SXNzdWU3NzM5NjAyNTU= | 1,627 | `Dataset.map` disable progress bar | [] | closed | false | null | 2 | 2020-12-23T17:53:42Z | 2023-02-08T02:37:47Z | 2020-12-26T19:57:17Z | null | I can't find anything to turn off the `tqdm` progress bars while running a preprocessing function using `Dataset.map`. I want something akin to `disable_tqdm=True` in the case of `transformers`. Is there something like that? | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1627/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1627/timeline | null | completed | null | null | false | [
"Progress bar can be disabled like this:\r\n```python\r\nfrom datasets.utils.logging import set_verbosity_error\r\nset_verbosity_error()\r\n```\r\n\r\nThere is this line in `Dataset.map`:\r\n```python\r\nnot_verbose = bool(logger.getEffectiveLevel() > WARNING)\r\n```\r\n\r\nSo any logging level higher than `WARNING... |
https://api.github.com/repos/huggingface/datasets/issues/5983 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5983/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5983/comments | https://api.github.com/repos/huggingface/datasets/issues/5983/events | https://github.com/huggingface/datasets/pull/5983 | 1,770,578,804 | PR_kwDODunzps5TtDdy | 5,983 | replaced PathLike as a variable for save_to_disk for dataset_path wit… | [] | open | false | null | 0 | 2023-06-23T00:57:05Z | 2023-06-23T00:57:05Z | null | null | …h str like that of load_from_disk | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5983/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5983/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5983.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5983",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5983.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5983"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1368 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1368/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1368/comments | https://api.github.com/repos/huggingface/datasets/issues/1368/events | https://github.com/huggingface/datasets/pull/1368 | 760,222,616 | MDExOlB1bGxSZXF1ZXN0NTM1MDkwMjM0 | 1,368 | Re-adding narrativeqa dataset | [] | closed | false | null | 4 | 2020-12-09T10:53:09Z | 2020-12-11T13:30:59Z | 2020-12-11T13:30:59Z | null | An update of #309. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1368/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1368/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/1368.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1368",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1368.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1368"
} | true | [
"@lhoestq I think I've fixed the dummy data - it finally passes! I'll add the model card now.",
"@lhoestq - pretty happy with it now",
"> Awesome thank you !\r\n> \r\n> Could you try to reduce the size of the dummy_data.zip file before we merge ? (it's 300KB right now)\r\n> \r\n> To do so feel free to take a lo... |
https://api.github.com/repos/huggingface/datasets/issues/4245 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4245/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4245/comments | https://api.github.com/repos/huggingface/datasets/issues/4245/events | https://github.com/huggingface/datasets/pull/4245 | 1,217,959,400 | PR_kwDODunzps426AUR | 4,245 | Add code examples for DatasetDict | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 1 | 2022-04-27T22:52:22Z | 2022-04-29T18:19:34Z | 2022-04-29T18:13:03Z | null | This PR adds code examples for `DatasetDict` in the API reference :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4245/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4245/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4245.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4245",
"merged_at": "2022-04-29T18:13:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4245.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4245"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5855/comments | https://api.github.com/repos/huggingface/datasets/issues/5855/events | https://github.com/huggingface/datasets/issues/5855 | 1,708,784,943 | I_kwDODunzps5l2f0v | 5,855 | `to_tf_dataset` consumes too much memory | [] | closed | false | null | 6 | 2023-05-14T01:22:29Z | 2023-06-08T16:32:52Z | 2023-06-08T16:32:52Z | null | ### Describe the bug
Hi, I'm using `to_tf_dataset` to convert a _large_ dataset to `tf.data.Dataset`. I observed that the data loading *before* training took a lot of time and memory, even with `batch_size=1`.
After some digging, I believe the reason lies in the shuffle behavior. The [source code](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/tf_utils.py#L185) uses `len(dataset)` as the `buffer_size`, which may load all the data into memory, and the [tf.data doc](https://www.tensorflow.org/guide/data#randomly_shuffling_input_data) also states that "While large buffer_sizes shuffle more thoroughly, they can take a lot of memory, and significant time to fill".
### Steps to reproduce the bug
```python
from datasets import Dataset
def gen(): # some large data
for i in range(50000000):
yield {"data": i}
ds = Dataset.from_generator(gen, cache_dir="./huggingface")
tf_ds = ds.to_tf_dataset(
batch_size=64,
shuffle=False, # no shuffle
drop_remainder=False,
prefetch=True,
)
# fast and memory friendly 🤗
for batch in tf_ds:
...
tf_ds_shuffle = ds.to_tf_dataset(
batch_size=64,
shuffle=True,
drop_remainder=False,
prefetch=True,
)
# slow and memory hungry for simple iteration 😱
for batch in tf_ds_shuffle:
...
```
### Expected behavior
Shuffling should not load all the data into memory. Would adding a `buffer_size` parameter to the `to_tf_dataset` API alleviate the problem?
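In the meantime, a possible workaround (a minimal sketch; note it shuffles *batches* rather than individual rows, since it runs after batching) is to skip the internal full-length shuffle and chain `tf.data`'s bounded-buffer `shuffle` onto the returned dataset:
```python
# Sketch: avoid the len(dataset)-sized internal buffer by shuffling with a
# small, explicit buffer on the tf.data side (this shuffles batches, not rows).
tf_ds = ds.to_tf_dataset(
    batch_size=64,
    shuffle=False,  # skip the internal shuffle(len(dataset))
    drop_remainder=False,
    prefetch=True,
)
tf_ds = tf_ds.shuffle(buffer_size=1_000)  # bounded memory: at most ~1,000 batches buffered
```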
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.17.1-051701-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 1.4.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5855/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5855/timeline | null | completed | null | null | false | [
"Cc @amyeroberts @Rocketknight1 \r\n\r\nIndded I think it's because it does something like this under the hood when there's no multiprocessing:\r\n\r\n```python\r\ntf_dataset = tf_dataset.shuffle(len(dataset))\r\n```\r\n\r\nPS: with multiprocessing it appears to be different:\r\n\r\n```python\r\nindices = np.arange... |
https://api.github.com/repos/huggingface/datasets/issues/984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/984/comments | https://api.github.com/repos/huggingface/datasets/issues/984/events | https://github.com/huggingface/datasets/pull/984 | 755,009,916 | MDExOlB1bGxSZXF1ZXN0NTMwODAzNzgw | 984 | committing Whoa file | [] | closed | false | null | 2 | 2020-12-02T07:07:46Z | 2020-12-02T16:15:29Z | 2020-12-02T15:40:58Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/984/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/984/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/984.diff",
"html_url": "https://github.com/huggingface/datasets/pull/984",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/984.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/984"
} | true | [
"can't find the Whoa file since there' nothing left",
"The classic `rm -rf` command - nice one"
] | |
https://api.github.com/repos/huggingface/datasets/issues/3328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3328/comments | https://api.github.com/repos/huggingface/datasets/issues/3328/events | https://github.com/huggingface/datasets/pull/3328 | 1,065,015,262 | PR_kwDODunzps4vFTpW | 3,328 | Quick fix error formatting | [] | closed | false | null | 0 | 2021-11-27T11:47:48Z | 2021-11-29T13:32:42Z | 2021-11-29T13:32:42Z | null | While working on a dataset, I got the error
```
TypeError: Provided `function` which is applied to all elements of table returns a `dict` of types {[type(x) for x in processed_inputs.values()]}. When using `batched=True`, make sure provided `function` returns a `dict` of types like `{allowed_batch_return_types}`.
```
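The `{...}` placeholders being printed literally suggests the message string was missing its `f` prefix; a runnable sketch of the behavior, with hypothetical stand-in values:
```python
# Sketch: without the f prefix the placeholders are emitted verbatim,
# exactly as in the error above; with it, they interpolate.
processed_inputs = {"a": [1, 2], "b": ["x", "y"]}  # hypothetical stand-in
allowed_batch_return_types = (list,)               # hypothetical stand-in

plain = "returns a `dict` of types {[type(x) for x in processed_inputs.values()]}"
interpolated = f"returns a `dict` of types {[type(x) for x in processed_inputs.values()]}"
print(plain)         # the placeholders appear literally
print(interpolated)  # e.g. [<class 'list'>, <class 'list'>]
```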
This PR should fix the formatting of this error | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3328/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3328/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3328.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3328",
"merged_at": "2021-11-29T13:32:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3328.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3328"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4472/comments | https://api.github.com/repos/huggingface/datasets/issues/4472/events | https://github.com/huggingface/datasets/pull/4472 | 1,267,488,523 | PR_kwDODunzps45drcb | 4,472 | Fix 401 error for unauthenticated requests to non-existing repos | [] | closed | false | null | 1 | 2022-06-10T12:38:11Z | 2022-06-10T13:05:11Z | 2022-06-10T12:55:57Z | null | The hub now returns 401 instead of 404 for unauthenticated requests to non-existing repos.
This PR adds support for the 401 error and fixes the CI failures on `master` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4472/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4472/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4472.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4472",
"merged_at": "2022-06-10T12:55:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4472.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4472"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2020/comments | https://api.github.com/repos/huggingface/datasets/issues/2020/events | https://github.com/huggingface/datasets/pull/2020 | 826,961,126 | MDExOlB1bGxSZXF1ZXN0NTg4OTE3MjYx | 2,020 | Remove unnecessary docstart check in conll-like datasets | [] | closed | false | null | 0 | 2021-03-10T02:20:16Z | 2021-03-11T13:33:37Z | 2021-03-11T13:33:37Z | null | Related to this PR: #1998
Additionally, this PR adds the docstart note to the conll2002 dataset card ([link](https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/ned.train) to the raw data with `DOCSTART` lines).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2020/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2020/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2020.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2020",
"merged_at": "2021-03-11T13:33:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2020.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2020"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3631 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3631/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3631/comments | https://api.github.com/repos/huggingface/datasets/issues/3631/events | https://github.com/huggingface/datasets/issues/3631 | 1,114,833,662 | I_kwDODunzps5CcwL- | 3,631 | Labels conflict when loading a local CSV file. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-01-26T10:00:33Z | 2022-02-11T23:02:31Z | 2022-02-11T23:02:31Z | null | ## Describe the bug
I am trying to load a local CSV file with a separate file containing label names. It is successfully loaded the first time, but when I try to load it again, there is a conflict between the provided labels and the cached dataset info. Disabling caching globally and/or using `download_mode="force_redownload"` did not help.
## Steps to reproduce the bug
```python
load_dataset('csv', data_files='data/my_data.csv',
features=Features(text=Value(dtype='string'),
label=ClassLabel(names_file='data/my_data_labels.txt')))
```
`my_data.csv` file has the following structure:
```
text,label
"example1",0
"example2",1
...
```
and the `my_data_labels.txt` looks like this:
```
label1
label2
...
```
## Expected results
Successfully loaded dataset.
## Actual results
```python
File "/usr/local/lib/python3.8/site-packages/datasets/load.py", line 1706, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/usr/local/lib/python3.8/site-packages/datasets/builder.py", line 766, in as_dataset
datasets = utils.map_nested(
File "/usr/local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 261, in map_nested
mapped = [
File "/usr/local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 262, in <listcomp>
_single_map_nested((function, obj, types, None, True))
File "/usr/local/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 197, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.8/site-packages/datasets/builder.py", line 797, in _build_single_dataset
ds = self._as_dataset(
File "/usr/local/lib/python3.8/site-packages/datasets/builder.py", line 872, in _as_dataset
return Dataset(fingerprint=fingerprint, **dataset_kwargs)
File "/usr/local/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 638, in __init__
inferred_features = Features.from_arrow_schema(arrow_table.schema)
File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1242, in from_arrow_schema
return Features.from_dict(metadata["info"]["features"])
File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1271, in from_dict
obj = generate_from_dict(dic)
File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1076, in generate_from_dict
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1076, in <dictcomp>
return {key: generate_from_dict(value) for key, value in obj.items()}
File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 1083, in generate_from_dict
return class_type(**{k: v for k, v in obj.items() if k in field_names})
File "<string>", line 7, in __init__
File "/usr/local/lib/python3.8/site-packages/datasets/features/features.py", line 776, in __post_init__
raise ValueError("Please provide either names or names_file but not both.")
ValueError: Please provide either names or names_file but not both.
```
## Environment info
- `datasets` version: 1.18.0
- Python version: 3.8.2
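Until this is fixed, a possible workaround (a hedged sketch: read the names in manually so that only `names`, never `names_file`, is set on the feature) is:
```python
# Sketch: pass the label names directly instead of names_file, so the cached
# dataset info does not end up carrying both fields at once.
from datasets import ClassLabel, Features, Value, load_dataset

with open("data/my_data_labels.txt") as f:
    names = [line.strip() for line in f if line.strip()]

dataset = load_dataset(
    "csv",
    data_files="data/my_data.csv",
    features=Features(text=Value(dtype="string"), label=ClassLabel(names=names)),
)
```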
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3631/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3631/timeline | null | completed | null | null | false | [
"Hi @pichljan, thanks for reporting.\r\n\r\nThis should be fixed. I'm looking at it. "
] |
https://api.github.com/repos/huggingface/datasets/issues/3482 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3482/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3482/comments | https://api.github.com/repos/huggingface/datasets/issues/3482/events | https://github.com/huggingface/datasets/pull/3482 | 1,088,317,921 | PR_kwDODunzps4wQqE1 | 3,482 | Fix duplicate keys in NewsQA | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 2 | 2021-12-24T11:01:59Z | 2022-09-23T12:57:10Z | 2022-09-23T12:57:10Z | null | * Fix duplicate keys in NewsQA when loading from CSV files.
* Fix s/narqa/newsqa/ in the manual-download error message.
* Make the manual-download error message display nicely when printed. Otherwise, it is hard to read due to spacing issues.
* Fix the format of the license text.
* Reformat the code to make it simpler. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3482/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3482/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3482.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3482",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3482.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3482"
} | true | [
"Flaky tests?",
"Thanks for your contribution, @bryant1410.\r\n\r\nI think the fix of the duplicate key in this PR was superseded by:\r\n- #3696\r\n\r\nI'm closing this because we are moving all dataset scripts from GitHub to the Hugging Face Hub."
] |
https://api.github.com/repos/huggingface/datasets/issues/5161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5161/comments | https://api.github.com/repos/huggingface/datasets/issues/5161/events | https://github.com/huggingface/datasets/issues/5161 | 1,422,371,748 | I_kwDODunzps5Ux6uk | 5,161 | Dataset can’t cache model’s outputs | [] | closed | false | null | 1 | 2022-10-25T12:19:00Z | 2022-11-03T16:12:52Z | 2022-11-03T16:12:51Z | null | ### Describe the bug
Hi,
I'm trying to cache some outputs of a teacher model (knowledge distillation) using the `map` function of the Datasets library, but every time I run my code, all the sequences are recomputed. I tested a BERT model like this and got a different hash on every single run, so any idea how to deal with this?
### Steps to reproduce the bug
1. Run the code below
2. Observe a different hash on each run
```
from transformers import BertModel
from transformers import AutoTokenizer
import torch
token = ['hello']
model = BertModel.from_pretrained("bert-base-uncased").eval()
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
def abcd():
with torch.no_grad():
out = model(**tok(token,return_tensors='pt'))[0]
# out = tok(token)
return out
from datasets.fingerprint import Hasher
my_func = abcd
print(Hasher.hash(my_func))
print(abcd())
```
### Expected behavior
I want to cache all the model outputs.
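One possible stopgap (a hedged sketch: `new_fingerprint` is an existing `map` parameter, while the dataset, wrapper, and column names below are hypothetical) is to pin the cache fingerprint manually so the unstable function hash no longer matters:
```python
# Sketch: force a fixed fingerprint so the cached result is reused on reruns,
# bypassing the non-deterministic hash of functions that close over a model.
# ds is a hypothetical datasets.Dataset with a "text" column.
def add_teacher_outputs(batch):  # hypothetical wrapper around the model call
    with torch.no_grad():
        out = model(**tok(batch["text"], return_tensors="pt", padding=True))[0]
    batch["teacher_hidden"] = out.tolist()
    return batch

ds = ds.map(add_teacher_outputs, batched=True, new_fingerprint="teacher-outputs-v1")
```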
### Environment info
datasets:2.5.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5161/timeline | null | completed | null | null | false | [
"Addressed in https://github.com/huggingface/datasets/pull/5191 (torch.Tensor objects now produce deterministic hashes)"
] |
https://api.github.com/repos/huggingface/datasets/issues/175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/175/comments | https://api.github.com/repos/huggingface/datasets/issues/175/events | https://github.com/huggingface/datasets/issues/175 | 621,929,428 | MDU6SXNzdWU2MjE5Mjk0Mjg= | 175 | [Manual data dir] Error message: nlp.load_dataset('xsum') -> TypeError | [] | closed | false | null | 0 | 2020-05-20T17:00:32Z | 2020-05-20T18:18:50Z | 2020-05-20T18:18:50Z | null | v 0.1.0 from pip
```python
import nlp
xsum = nlp.load_dataset('xsum')
```
The issue is that `dl_manager.manual_dir` is `None`:
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-42-8a32f066f3bd> in <module>
----> 1 xsum = nlp.load_dataset('xsum')
~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs)
515 download_mode=download_mode,
516 ignore_verifications=ignore_verifications,
--> 517 save_infos=save_infos,
518 )
519
~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, dl_manager, **download_and_prepare_kwargs)
361 verify_infos = not save_infos and not ignore_verifications
362 self._download_and_prepare(
--> 363 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
364 )
365 # Sync info
~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
397 split_dict = SplitDict(dataset_name=self.name)
398 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs)
--> 399 split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
400 # Checksums verification
401 if verify_infos:
~/miniconda3/envs/nb/lib/python3.7/site-packages/nlp/datasets/xsum/5c5fca23aaaa469b7a1c6f095cf12f90d7ab99bcc0d86f689a74fd62634a1472/xsum.py in _split_generators(self, dl_manager)
102 with open(dl_path, "r") as json_file:
103 split_ids = json.load(json_file)
--> 104 downloaded_path = os.path.join(dl_manager.manual_dir, "xsum-extracts-from-downloads")
105 return [
106 nlp.SplitGenerator(
~/miniconda3/envs/nb/lib/python3.7/posixpath.py in join(a, *p)
78 will be discarded. An empty last part will result in a path that
79 ends with a separator."""
---> 80 a = os.fspath(a)
81 sep = _get_sep(a)
82 path = a
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
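The traceback shows that `load_dataset` accepts a `data_dir` argument; assuming it is what populates `dl_manager.manual_dir` here, a hedged sketch with a hypothetical path:
```python
# Sketch: point data_dir at the directory that contains the manually
# prepared "xsum-extracts-from-downloads" folder (path is hypothetical).
import nlp

xsum = nlp.load_dataset("xsum", data_dir="/path/to/xsum-downloads")
```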
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/175/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/175/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3344/comments | https://api.github.com/repos/huggingface/datasets/issues/3344/events | https://github.com/huggingface/datasets/pull/3344 | 1,067,567,603 | PR_kwDODunzps4vNJwd | 3,344 | Add ArrayXD docs | [] | closed | false | null | 0 | 2021-11-30T18:53:31Z | 2021-12-01T20:16:03Z | 2021-12-01T19:35:32Z | null | Documents support for dynamic first dimension in `ArrayXD` from #2891, and explain the `ArrayXD` feature in general.
Let me know if I'm missing anything @lhoestq :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3344/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3344/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3344.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3344",
"merged_at": "2021-12-01T19:35:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3344.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3344"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/79 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/79/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/79/comments | https://api.github.com/repos/huggingface/datasets/issues/79/events | https://github.com/huggingface/datasets/pull/79 | 616,785,613 | MDExOlB1bGxSZXF1ZXN0NDE2ODI5NzMy | 79 | [Convert] add new pattern | [] | closed | false | null | 0 | 2020-05-12T16:16:51Z | 2020-05-12T16:17:10Z | 2020-05-12T16:17:09Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/79/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/79/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/79.diff",
"html_url": "https://github.com/huggingface/datasets/pull/79",
"merged_at": "2020-05-12T16:17:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/79.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/79"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/5226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5226/comments | https://api.github.com/repos/huggingface/datasets/issues/5226/events | https://github.com/huggingface/datasets/issues/5226 | 1,444,385,148 | I_kwDODunzps5WF5F8 | 5,226 | Q: Memory release when removing the column? | [] | closed | false | null | 3 | 2022-11-10T18:35:27Z | 2022-11-29T15:10:10Z | 2022-11-29T15:10:10Z | null | ### Describe the bug
How do I release memory when I use methods like `.remove_columns()` or `clear()` in notebooks?
```python
from datasets import load_dataset
common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "ja", use_auth_token=True)
# check memory -> RAM Used (GB): 0.704 / Total (GB) 33.670
common_voice = common_voice.remove_columns(column_names=common_voice.column_names['train'])
common_voice.clear()
# check memory -> RAM Used (GB): 0.705 / Total (GB) 33.670
```
I tried `gc.collect()`, but it did not help.
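For reference, the "check memory" steps above could be reproduced with a sketch like this (assuming `psutil` is available):
```python
# Sketch: report this process's resident set size and the machine total.
import os
import psutil

def print_ram():
    rss_gb = psutil.Process(os.getpid()).memory_info().rss / 1e9
    total_gb = psutil.virtual_memory().total / 1e9
    print(f"RAM Used (GB): {rss_gb:.3f} / Total (GB) {total_gb:.3f}")
```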
### Steps to reproduce the bug
1. Load the dataset
2. Remove all the columns
3. Check whether memory is reduced
[link to reproduce](https://www.kaggle.com/code/bayartsogtya/huggingface-dataset-memory-issue/notebook?scriptVersionId=110630567)
### Expected behavior
Memory is released when I remove the columns.
### Environment info
- `datasets` version: 2.1.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 8.0.0
- Pandas version: 1.3.5 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5226/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5226/timeline | null | completed | null | null | false | [
"Hi ! Datasets are memory mapped from your disk, i.e. they're not loaded in RAM. This is possible thanks to the Arrow data format.\r\n\r\nTherefore the column you remove is not in RAM, so removing it doesn't cause the RAM to decrease.",
"Thanks for the explanation! @lhoestq \r\nI wonder since it is memory mapped,... |
https://api.github.com/repos/huggingface/datasets/issues/738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/738/comments | https://api.github.com/repos/huggingface/datasets/issues/738/events | https://github.com/huggingface/datasets/pull/738 | 723,033,923 | MDExOlB1bGxSZXF1ZXN0NTA0NjkxNjM4 | 738 | Replace seqeval code with original classification_report for simplicity | [] | closed | false | null | 3 | 2020-10-16T08:51:45Z | 2021-01-21T16:07:15Z | 2020-10-19T10:31:12Z | null | Recently, the original seqeval has enabled us to get per type scores and overall scores as a dictionary.
This PR replaces the current code with the original function (`classification_report`) to simplify it.
Also, the original code has been updated to fix #352.
- Related issue: https://github.com/chakki-works/seqeval/pull/38
```python
from datasets import load_metric
metric = load_metric("seqeval")
y_true = [['O', 'O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
y_pred = [['O', 'O', 'B-MISC', 'I-MISC', 'I-MISC', 'I-MISC', 'O'], ['B-PER', 'I-PER', 'O']]
metric.compute(predictions=y_pred, references=y_true)
# Output: {'MISC': {'precision': 0.0, 'recall': 0.0, 'f1': 0, 'number': 1}, 'PER': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 1}, 'overall_precision': 0.5, 'overall_recall': 0.5, 'overall_f1': 0.5, 'overall_accuracy': 0.8}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/738/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/738/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/738.diff",
"html_url": "https://github.com/huggingface/datasets/pull/738",
"merged_at": "2020-10-19T10:31:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/738.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/738"
} | true | [
"Hello,\r\n\r\nI ran https://github.com/huggingface/transformers/blob/master/examples/token-classification/run.sh\r\n\r\nAnd received this error:\r\n```\r\n100%|██████████| 407/407 [21:37<00:00, 3.44s/it]Traceback (most recent call last):\r\n File \"run_ner.py\", line 445, in <module>\r\n main()\r\n File \"ru... |
https://api.github.com/repos/huggingface/datasets/issues/2011 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2011/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2011/comments | https://api.github.com/repos/huggingface/datasets/issues/2011/events | https://github.com/huggingface/datasets/pull/2011 | 825,621,952 | MDExOlB1bGxSZXF1ZXN0NTg3Njk4MTAx | 2,011 | Add RoSent Dataset | [] | closed | false | null | 0 | 2021-03-09T09:40:08Z | 2021-03-11T18:00:52Z | 2021-03-11T18:00:52Z | null | This PR adds a Romanian sentiment analysis dataset. This PR also closes pending PR #1529.
I had to add an `original_id` feature because the dataset files have repeated IDs. I can remove it if needed. I have also added an `id` feature, which is unique.
Let me know in case of any issues. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2011/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2011/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2011.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2011",
"merged_at": "2021-03-11T18:00:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2011.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2011"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3599 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3599/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3599/comments | https://api.github.com/repos/huggingface/datasets/issues/3599/events | https://github.com/huggingface/datasets/issues/3599 | 1,108,111,607 | I_kwDODunzps5CDHD3 | 3,599 | The `add_column()` method does not work if used on dataset sliced with `select()` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-01-19T13:36:50Z | 2022-01-28T15:35:57Z | 2022-01-28T15:35:57Z | null | Hello, I posted this as a question on the forums ([here](https://discuss.huggingface.co/t/add-column-does-not-work-if-used-on-dataset-sliced-with-select/13893)):
I have a dataset with 2000 entries
> dataset = Dataset.from_dict({'colA': list(range(2000))})
and from which I want to extract the first one thousand rows, create a new dataset with these and also add a new column to it:
> dataset2 = dataset.select(list(range(1000)))
> final_dataset = dataset2.add_column('colB', list(range(1000)))
This gives an error
> ArrowInvalid: Added column's length must match table's length. Expected length 2000 but got length 1000
So it looks like even though it is a dataset with 1000 rows, it "remembers" the shape of the one it was sliced from.
## Actual results
```
ArrowInvalid Traceback (most recent call last)
<ipython-input-138-e806860f3ce3> in <module>
----> 1 final_dataset = dataset2.add_column('colB', list(range(1000)))
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
468 }
469 # apply actual function
--> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
472 # re-apply format to the output
~/.local/lib/python3.8/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in add_column(self, name, column, new_fingerprint)
3343 column_table = InMemoryTable.from_pydict({name: column})
3344 # Concatenate tables horizontally
-> 3345 table = ConcatenationTable.from_tables([self._data, column_table], axis=1)
3346 # Update features
3347 info = self.info.copy()
~/.local/lib/python3.8/site-packages/datasets/table.py in from_tables(cls, tables, axis)
729 table_blocks = to_blocks(table)
730 blocks = _extend_blocks(blocks, table_blocks, axis=axis)
--> 731 return cls.from_blocks(blocks)
732
733 @property
~/.local/lib/python3.8/site-packages/datasets/table.py in from_blocks(cls, blocks)
668 @classmethod
669 def from_blocks(cls, blocks: TableBlockContainer) -> "ConcatenationTable":
--> 670 blocks = cls._consolidate_blocks(blocks)
671 if isinstance(blocks, TableBlock):
672 table = blocks
~/.local/lib/python3.8/site-packages/datasets/table.py in _consolidate_blocks(cls, blocks)
664 return cls._merge_blocks(blocks, axis=0)
665 else:
--> 666 return cls._merge_blocks(blocks)
667
668 @classmethod
~/.local/lib/python3.8/site-packages/datasets/table.py in _merge_blocks(cls, blocks, axis)
650 merged_blocks += list(block_group)
651 else: # both
--> 652 merged_blocks = [cls._merge_blocks(row_block, axis=1) for row_block in blocks]
653 if all(len(row_block) == 1 for row_block in merged_blocks):
654 merged_blocks = cls._merge_blocks(
~/.local/lib/python3.8/site-packages/datasets/table.py in <listcomp>(.0)
650 merged_blocks += list(block_group)
651 else: # both
--> 652 merged_blocks = [cls._merge_blocks(row_block, axis=1) for row_block in blocks]
653 if all(len(row_block) == 1 for row_block in merged_blocks):
654 merged_blocks = cls._merge_blocks(
~/.local/lib/python3.8/site-packages/datasets/table.py in _merge_blocks(cls, blocks, axis)
647 for is_in_memory, block_group in groupby(blocks, key=lambda x: isinstance(x, InMemoryTable)):
648 if is_in_memory:
--> 649 block_group = [InMemoryTable(cls._concat_blocks(list(block_group), axis=axis))]
650 merged_blocks += list(block_group)
651 else: # both
~/.local/lib/python3.8/site-packages/datasets/table.py in _concat_blocks(blocks, axis)
626 else:
627 for name, col in zip(table.column_names, table.columns):
--> 628 pa_table = pa_table.append_column(name, col)
629 return pa_table
630 else:
~/.local/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.append_column()
~/.local/lib/python3.8/site-packages/pyarrow/table.pxi in pyarrow.lib.Table.add_column()
~/.local/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.local/lib/python3.8/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Added column's length must match table's length. Expected length 2000 but got length 1000
```
A solution provided by @mariosasko is to use `dataset2.flatten_indices()` after the `select()` and before attempting to add the new column:
> dataset = Dataset.from_dict({'colA': list(range(2000))})
> dataset2 = dataset.select(list(range(1000)))
> dataset2 = dataset2.flatten_indices()
> final_dataset = dataset2.add_column('colB', list(range(1000)))
which works.
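Put together, a minimal end-to-end sketch of the failure and the workaround:
```python
from datasets import Dataset

dataset = Dataset.from_dict({"colA": list(range(2000))})
subset = dataset.select(range(1000))  # keeps an indices mapping over the full 2000-row table
subset = subset.flatten_indices()     # materializes a true 1000-row table
final_dataset = subset.add_column("colB", list(range(1000)))  # now succeeds
print(final_dataset.num_rows)         # 1000
```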
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.13.2 (note: also checked with version 1.17.0, still the same error)
- Platform: Ubuntu 20.04.3
- Python version: 3.8.10
- PyArrow version: 6.0.0
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3599/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3599/timeline | null | completed | null | null | false | [
"similar #3611 "
] |
https://api.github.com/repos/huggingface/datasets/issues/595 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/595/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/595/comments | https://api.github.com/repos/huggingface/datasets/issues/595/events | https://github.com/huggingface/datasets/issues/595 | 696,892,304 | MDU6SXNzdWU2OTY4OTIzMDQ= | 595 | `Dataset`/`DatasetDict` has no attribute 'save_to_disk' | [] | closed | false | null | 2 | 2020-09-09T15:01:52Z | 2020-09-09T16:20:19Z | 2020-09-09T16:20:18Z | null | Hi,
As the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) in the repo here has the method, the file `arrow_dataset.py` which is saved after `pip install nlp -U` in my `conda` environment DOES NOT contain the `save_to_disk` method. I even tried `pip install git+https://github.com/huggingface/nlp.git ` and still no luck. Do I need to install the library in another way? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/595/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/595/timeline | null | completed | null | null | false | [
"`pip install git+https://github.com/huggingface/nlp.git` should have done the job.\r\n\r\nDid you uninstall `nlp` before installing from github ?",
"> Did you uninstall `nlp` before installing from github ?\r\n\r\nI did not. I created a new environment and installed `nlp` directly from `github` and it worked!\r\... |
https://api.github.com/repos/huggingface/datasets/issues/5895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5895/comments | https://api.github.com/repos/huggingface/datasets/issues/5895/events | https://github.com/huggingface/datasets/issues/5895 | 1,725,467,252 | I_kwDODunzps5m2Ip0 | 5,895 | The dir name and split strings are confused when loading ArmelR/stack-exchange-instruction dataset | [] | closed | false | null | 2 | 2023-05-25T09:39:06Z | 2023-05-29T02:32:12Z | 2023-05-29T02:32:12Z | null | ### Describe the bug
When I load the ArmelR/stack-exchange-instruction dataset, I encounter a bug that appears to be caused by confusing the data directory name string with the split string.
When I call `datasets.load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)`, it fails, but it succeeds when I add the `streaming=True` parameter.
The website of the dataset is https://huggingface.co/datasets/ArmelR/stack-exchange-instruction/ .
The traceback logs are as below:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 985, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/builder.py", line 1706, in _prepare_split
split_info = self.info.splits[split_generator.name]
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/splits.py", line 530, in __getitem__
instructions = make_file_instructions(
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 112, in make_file_instructions
name2filenames = {
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/arrow_reader.py", line 113, in <dictcomp>
info.name: filenames_for_dataset_split(
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 70, in filenames_for_dataset_split
prefix = filename_prefix_for_split(dataset_name, split)
File "/home/xxx/miniconda3/envs/code/lib/python3.9/site-packages/datasets/naming.py", line 54, in filename_prefix_for_split
if os.path.basename(name) != name:
File "/home/xxx/miniconda3/envs/code/lib/python3.9/posixpath.py", line 142, in basename
p = os.fspath(p)
TypeError: expected str, bytes or os.PathLike object, not NoneType
### Steps to reproduce the bug
1. Import the loading function: ```from datasets import load_dataset```
2. Load the dataset: ```ds=load_dataset('ArmelR/stack-exchange-instruction', data_dir="data/finetune", split="train", use_auth_token=True)```
### Expected behavior
The dataset can be loaded successfully without the streaming setting.
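Until this is resolved, the streaming path that does work can serve as a temporary workaround:
```python
# Streaming succeeds where the non-streaming path fails, per the report above.
from datasets import load_dataset

ds = load_dataset(
    "ArmelR/stack-exchange-instruction",
    data_dir="data/finetune",
    split="train",
    streaming=True,
    use_auth_token=True,
)
```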
### Environment info
Linux,
python=3.9
datasets=2.12.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5895/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5895/timeline | null | completed | null | null | false | [
"Thanks for reporting, @DongHande.\r\n\r\nI think the issue is caused by the metadata in the dataset card: in the header of the `README.md`, they state that the dataset has 4 splits (\"finetune\", \"reward\", \"rl\", \"evaluation\"). \r\n```yaml\r\n splits:\r\n - name: finetune\r\n num_bytes: 6674567576\r\... |