| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2069
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2069/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2069/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2069/events
|
https://github.com/huggingface/datasets/pull/2069
| 833,768,926
|
MDExOlB1bGxSZXF1ZXN0NTk0NzA5ODYw
| 2,069
|
Add and fix docstring for NamedSplit
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-17T13:19:28
| 2021-03-18T10:27:40
| 2021-03-18T10:27:40
|
MEMBER
| null |
Add and fix docstring for `NamedSplit`, which was missing.
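For context, a brief hypothetical usage sketch of `NamedSplit` (not taken from the PR), showing the kind of API the new docstring documents:
```python
from datasets import Dataset, NamedSplit

# A named split simply labels the examples a dataset was built from.
ds = Dataset.from_dict({"x": [1, 2, 3]}, split=NamedSplit("train"))
print(ds.split)  # train
```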
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2069/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2069/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2069",
"html_url": "https://github.com/huggingface/datasets/pull/2069",
"diff_url": "https://github.com/huggingface/datasets/pull/2069.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2069.patch",
"merged_at": "2021-03-18T10:27:40"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2068
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2068/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2068/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2068/events
|
https://github.com/huggingface/datasets/issues/2068
| 833,602,832
|
MDU6SXNzdWU4MzM2MDI4MzI=
| 2,068
|
PyTorch not available error on SageMaker GPU docker though it is installed
|
{
"login": "sivakhno",
"id": 1651457,
"node_id": "MDQ6VXNlcjE2NTE0NTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1651457?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sivakhno",
"html_url": "https://github.com/sivakhno",
"followers_url": "https://api.github.com/users/sivakhno/followers",
"following_url": "https://api.github.com/users/sivakhno/following{/other_user}",
"gists_url": "https://api.github.com/users/sivakhno/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sivakhno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sivakhno/subscriptions",
"organizations_url": "https://api.github.com/users/sivakhno/orgs",
"repos_url": "https://api.github.com/users/sivakhno/repos",
"events_url": "https://api.github.com/users/sivakhno/events{/privacy}",
"received_events_url": "https://api.github.com/users/sivakhno/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"cc @philschmid ",
"Hey @sivakhno,\r\n\r\nhow does your `requirements.txt` look like to install the `datasets` library and which version of it are you running? Can you try to install `datasets>=1.4.0`",
"Hi @philschmid - thanks for suggestion. I am using `datasets==1.4.1`. \r\nI have also tried using `torch=1.6.0` (docker `763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.6.0-gpu-py3 `), but the error is the same. ",
"Could paste the code you use the start your training job and the fine-tuning script you run? ",
"@sivakhno this should be now fixed in `datasets>=1.5.0`. ",
"@philschmid Recently released tensorflow-macos seems to be missing. ",
"I've created a PR to add this. "
] | 2021-03-17T10:04:27
| 2021-06-14T04:47:30
| 2021-06-14T04:47:30
|
NONE
| null |
I get an error when running data loading using the SageMaker SDK:
```
File "main.py", line 34, in <module>
run_training()
File "main.py", line 25, in run_training
dm.setup('fit')
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
return fn(*args, **kwargs)
File "/opt/ml/code/data_module.py", line 103, in setup
self.dataset[split].set_format(type="torch", columns=self.columns)
File "/opt/conda/lib/python3.6/site-packages/datasets/fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 995, in set_format
_ = get_formatter(type, **format_kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/formatting/__init__.py", line 114, in get_formatter
raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type]
ValueError: PyTorch needs to be installed to be able to return PyTorch tensors.
```
when trying to execute dataset loading using this notebook: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb, specifically these lines:
```
self.columns = [c for c in self.dataset[split].column_names if c in self.loader_columns]
self.dataset[split].set_format(type="torch", columns=self.columns)
```
The SageMaker Docker image used is `763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.4.0-gpu-py3`.
By running the container interactively, I have checked that torch loads successfully by executing the check at `https://github.com/huggingface/datasets/blob/master/src/datasets/config.py#L39`.
Also, as the first lines of the data loading module I have:
```
import os
os.environ["USE_TF"] = "0"
os.environ["USE_TORCH"] = "1"
```
But unfortunately the error still persists. Any suggestions would be appreciated, as I am stuck.
Many thanks!
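For reference, a minimal sanity check, offered as a sketch rather than part of the original report: if torch is importable inside the container, `set_format(type="torch")` should succeed; otherwise it raises the `ValueError` above. Per the comments, upgrading to `datasets>=1.5.0` is assumed to be the actual fix.
```python
import torch  # fails fast here if torch is genuinely missing from the image
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})
ds.set_format(type="torch", columns=["x"])  # raises the ValueError above if datasets cannot see torch
print(type(ds[0]["x"]))  # expected: <class 'torch.Tensor'>
```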
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2068/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2068/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2067
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2067/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2067/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2067/events
|
https://github.com/huggingface/datasets/issues/2067
| 833,559,940
|
MDU6SXNzdWU4MzM1NTk5NDA=
| 2,067
|
Multiprocessing windows error
|
{
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Thanks for reporting.\r\nThis looks like a bug, could you try to provide a minimal code example that reproduces the issue ? This would be very helpful !\r\n\r\nOtherwise I can try to run the wav2vec2 code above on my side but probably not this week..",
"```\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('glue', 'mrpc', split='train')\r\n\r\n\r\nupdated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n\r\n```",
"\r\n\r\n\r\n\r\n\r\nI was able to copy some of the shell \r\nThis is repeating every half second\r\nWin 10, Anaconda with python 3.8, datasets installed from main branche\r\n```\r\n\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n _check_not_importing_main()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 116, in spawn_main\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 134, in _check_not_importing_main\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n exitcode = _main(fd, parent_sentinel)\r\n raise RuntimeError('''\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 125, in _main\r\nRuntimeError:\r\n An attempt has been made to start a new process before the\r\n current process has finished its bootstrapping phase.\r\n\r\n This probably means that you are not using fork to start your\r\n child processes and you have forgotten to use the proper idiom\r\n in the main module:\r\n\r\n if __name__ == '__main__':\r\n freeze_support()\r\n ...\r\n\r\n The \"freeze_support()\" line can be omitted if the program\r\n is not going to be frozen to produce an executable. return _run_module_code(code, init_globals, run_name,\r\n prepare(preparation_data)\r\n\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 236, in prepare\r\n _run_code(code, mod_globals, init_globals,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n _fixup_main_from_path(data['init_main_from_path'])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\test.py\", line 6, in <module>\r\n updated_dataset = dataset.map(lambda example: {'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1370, in map\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n with Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 119, in Pool\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 212, in __init__\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\test.py\", line 6, in <module>\r\n self._repopulate_pool()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 303, in _repopulate_pool\r\n updated_dataset = dataset.map(lambda example: 
{'sentence1': 'My sentence: ' + example['sentence1']}, num_proc=4)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1370, in map\r\n return self._repopulate_pool_static(self._ctx, self.Process,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 326, in _repopulate_pool_static\r\n with Pool(num_proc, initargs=(RLock(),), initializer=tqdm.set_lock) as pool:\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 119, in Pool\r\n w.start()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\process.py\", line 121, in start\r\n return Pool(processes, initializer, initargs, maxtasksperchild,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 212, in __init__\r\n self._popen = self._Popen(self)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\context.py\", line 327, in _Popen\r\n self._repopulate_pool()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 303, in _repopulate_pool\r\n return Popen(process_obj)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\popen_spawn_win32.py\", line 45, in __init__\r\n return self._repopulate_pool_static(self._ctx, self.Process,\r\n prep_data = spawn.get_preparation_data(process_obj._name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\pool.py\", line 326, in _repopulate_pool_static\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 154, in get_preparation_data\r\n _check_not_importing_main()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 134, in _check_not_importing_main\r\n w.start()\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\process.py\", line 121, in start\r\n raise RuntimeError('''\r\nRuntimeError:\r\n An attempt has been made to start a new process before the\r\n current process has finished its bootstrapping phase.\r\n\r\n This probably means that you are not using fork to start your\r\n child processes and you have forgotten to use the proper idiom\r\n in the main module:\r\n\r\n if __name__ == '__main__':\r\n freeze_support()\r\n ...\r\n```",
"Thanks this is really helpful !\r\nI'll try to reproduce on my side and come back to you",
"if __name__ == '__main__':\r\n\r\n\r\nThis line before calling the map function stops the error but the script still repeats endless",
"Indeed you needed `if __name__ == '__main__'` since accoding to [this stackoverflow post](https://stackoverflow.com/a/18205006):\r\n\r\n> On Windows the subprocesses will import (i.e. execute) the main module at start. You need to insert an if __name__ == '__main__': guard in the main module to avoid creating subprocesses recursively.\r\n\r\nRegarding the hanging issue, can you try to update `dill` and `multiprocess` ?",
"It's already on the newest version",
"```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 791, in move\r\n os.rename(src, real_dst)\r\nFileExistsError: [WinError 183] Eine Datei kann nicht erstellt werden, wenn sie bereits vorhanden ist: 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\tmpx9fl_jg8' -> 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\cache-9b4f203a63742dfc.arrow'\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<string>\", line 1, in <module>\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 116, in spawn_main\r\n exitcode = _main(fd, parent_sentinel)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 125, in _main\r\n prepare(preparation_data)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 236, in prepare\r\n _fixup_main_from_path(data['init_main_from_path'])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\multiprocess\\spawn.py\", line 287, in _fixup_main_from_path\r\n main_content = runpy.run_path(main_path,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 265, in run_path\r\n return _run_module_code(code, init_globals, run_name,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 97, in _run_module_code\r\n _run_code(code, mod_globals, init_globals,\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"F:\\Codes\\Python Apps\\asr\\cvtrain.py\", line 243, in <module>\r\n common_voice_train = common_voice_train.map(remove_special_characters, remove_columns=[\"sentence\"])\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1339, in map\r\n return self._map_single(\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 203, in wrapper\r\n out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\fingerprint.py\", line 337, in wrapper\r\n out = func(self, *args, **kwargs)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\site-packages\\datasets\\arrow_dataset.py\", line 1646, in _map_single\r\n shutil.move(tmp_file.name, cache_file_name)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 805, in move\r\n copy_function(src, real_dst)\r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 435, in copy2\r\n copyfile(src, dst, follow_symlinks=follow_symlinks)\r\n 0%| | 0/27771 [00:00<?, ?ex/s] \r\n File \"C:\\Users\\flozi\\anaconda3\\envs\\wav2vec\\lib\\shutil.py\", line 264, in copyfile\r\n with open(src, 'rb') as fsrc, open(dst, 'wb') as fdst:\r\nOSError: [Errno 22] Invalid argument: 'D:\\\\huggingfacecache\\\\common_voice\\\\de\\\\6.1.0\\\\0041e06ab061b91d0a23234a2221e87970a19cf3a81b20901474cffffeb7869f\\\\cache-9b4f203a63742dfc.arrow'\r\n```\r\n\r\nI was adding freeze support before calling the mapping function like this\r\nif __name__ == '__main__':\r\n freeze_support()\r\n dataset.map(....)",
"Usually OSError of an arrow file on windows means that the file is currently opened as a dataset object, so you can't overwrite it until the dataset object falls out of scope.\r\nCan you make sure that there's no dataset object that loaded the `cache-9b4f203a63742dfc.arrow` file ?",
"Now I understand\r\nThe error occures because the script got restarted in another thread, so the object is already loaded.\r\nStill don't have an idea why a new thread starts the whole script again"
] | 2021-03-17T09:12:28
| 2021-08-04T17:59:08
| 2021-08-04T17:59:08
|
CONTRIBUTOR
| null |
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
When using the `num_proc` argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the `map_to_array` part.
An error occurs because the cache file already exists and Windows throws an error. After this the log crashes into a loop.
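For reference, a minimal sketch of the spawn-safe pattern discussed in the comments above (the reproduction script itself is quoted there): on Windows, worker processes re-import the main module, so anything that spawns workers must sit behind the `__main__` guard.
```python
from datasets import load_dataset

def add_prefix(example):
    return {"sentence1": "My sentence: " + example["sentence1"]}

if __name__ == "__main__":
    # Without this guard, each spawned worker re-runs the module top level and
    # tries to start its own pool, producing the RuntimeError loop shown above.
    dataset = load_dataset("glue", "mrpc", split="train")
    updated_dataset = dataset.map(add_prefix, num_proc=4)
```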
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2067/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2067/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2066
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2066/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2066/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2066/events
|
https://github.com/huggingface/datasets/pull/2066
| 833,480,551
|
MDExOlB1bGxSZXF1ZXN0NTk0NDcwMjEz
| 2,066
|
Fix docstring rendering of Dataset/DatasetDict.from_csv args
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-17T07:23:10
| 2021-03-17T09:21:21
| 2021-03-17T09:21:21
|
MEMBER
| null |
Fix the docstring rendering of Dataset/DatasetDict.from_csv args.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2066/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2066/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2066",
"html_url": "https://github.com/huggingface/datasets/pull/2066",
"diff_url": "https://github.com/huggingface/datasets/pull/2066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2066.patch",
"merged_at": "2021-03-17T09:21:21"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2065
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2065/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2065/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2065/events
|
https://github.com/huggingface/datasets/issues/2065
| 833,291,432
|
MDU6SXNzdWU4MzMyOTE0MzI=
| 2,065
|
Only user permission of saved cache files, not group
|
{
"login": "lorr1",
"id": 57237365,
"node_id": "MDQ6VXNlcjU3MjM3MzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/57237365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lorr1",
"html_url": "https://github.com/lorr1",
"followers_url": "https://api.github.com/users/lorr1/followers",
"following_url": "https://api.github.com/users/lorr1/following{/other_user}",
"gists_url": "https://api.github.com/users/lorr1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lorr1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorr1/subscriptions",
"organizations_url": "https://api.github.com/users/lorr1/orgs",
"repos_url": "https://api.github.com/users/lorr1/repos",
"events_url": "https://api.github.com/users/lorr1/events{/privacy}",
"received_events_url": "https://api.github.com/users/lorr1/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! Thanks for reporting.\r\n\r\nCurrently there's no way to specify this.\r\n\r\nWhen loading/processing a dataset, the arrow file is written using a temporary file. Then once writing is finished, it's moved to the cache directory (using `shutil.move` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1646))\r\n\r\nThat means it keeps the permissions specified by the `tempfile.NamedTemporaryFile` object, i.e. `-rw-------` instead of `-rw-r--r--`. Improving this could be a nice first contribution to the library :)",
"Hi @lhoestq,\r\nI looked into this and yes you're right. The `NamedTemporaryFile` is always created with mode 0600, which prevents group from reading the file. Should we change the permissions of `tmp_file.name` [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1871) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1590), post creation to 0644 inorder for group and others to read it?",
"Good idea :) we could even update the permissions after the file has been moved by shutil.move [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1899) and [here](https://github.com/huggingface/datasets/blob/f6b8251eb975f66a568356d2a40d86442c03beb9/src/datasets/arrow_dataset.py#L1646) actually.\r\nApparently they set the default 0600 for temporary files for security reasons, so let's update the umask only after the file has been moved",
"Would it be possible to actually set the umask based on a user provided argument? For example, a popular usecase my team has is using a shared file-system for processing datasets. This may involve writing/deleting other files, or changing filenames, which a -rw-r--r-- wouldn't fix. ",
"Note that you can get the cache files of a dataset with the `cache_files` attributes.\r\nThen you can `chmod` those files and all the other cache files in the same directory.\r\n\r\nMoreover we can probably keep the same permissions after each transform. This way you just need to set the permissions once after doing `load_dataset` for example, and then all the new transformed cached files will have the same permissions.\r\nWhat do you think ?",
"This means we'll check the permissions of other `cache_files` already created for a dataset before setting permissions for new `cache_files`?",
"You can just check the permission of `dataset.cache_files[0]` imo",
"> This way you just need to set the permissions once after doing load_dataset for example, and then all the new transformed cached files will have the same permissions.\r\n\r\nI was referring to this. Ensuring that newly generated `cache_files` have the same permissions",
"Yes exactly\r\n\r\nI imagine users can first do `load_dataset`, then chmod on the arrow files. After that all the new cache files could have the same permissions as the first arrow files. Opinions on this ?",
"Sounds nice but I feel this is a sub-part of the approach mentioned by @siddk. Instead of letting the user set new permissions by itself first and then making sure newly generated files have same permissions why don't we ask the user initially only what they want? What are your thoughts?",
"Yes sounds good. Should this be a parameter in `load_dataset` ? Or an env variable ? Or use the value of `os.umask` ?",
"Ideally it should be a parameter in `load_dataset` but I'm not sure how important it is for the users (considering only important things should go into `load_dataset` parameters)",
"I think it's fairly important; for context, our team uses a shared file-system where many folks run experiments based on datasets that are cached by other users.\r\n\r\nFor example, I might start a training run, downloading a dataset. Then, a couple of days later, a collaborator using the same repository might want to use the same dataset on the same shared filesystem, but won't be able to under the default permissions.\r\n\r\nBeing able to specify directly in the top-level `load_dataset()` call seems important, but an equally valid option would be to just inherit from the running user's `umask` (this should probably be the default anyway).\r\n\r\nSo basically, argument that takes a custom set of permissions, and by default, use the running user's umask!",
"Maybe let's start by defaulting to the user's umask !\r\nDo you want to give it a try @bhavitvyamalik ?",
"Yeah sure! Instead of using default `0o644` should I first extract umask of current user and then use `os.umask` on it? We can do it inside `Dataset` class so that all folders/files created during the call use running user's umask\r\n\r\n",
"You can get the umask using `os.umask` and then I guess you can just use `os.chmod` as in your previous PR, but with the right permissions depending on the umask.",
"FWIW, we have this issue with other caches - e.g. `transformers` model files. So probably will need to backport this into `transformers` as well.\r\n\r\nthanks @thomwolf for the pointer.",
"Hi @stas00,\r\nFor this should we use the same umask code in the respective model directory inside `TRANSFORMERS_CACHE`?",
"That sounds very right to me, @bhavitvyamalik ",
"The cluster I am working on does not allow me to change the permission of the files with os.chmod. I was wondering if there is any workaround for this? My cache is in a GCP bucket and I can't change file permissions once I mount it.",
"@vmurahari3 what error do you have exactly ?",
"I get a permission denied error on https://github.com/huggingface/datasets/blob/b8363e0539c6f0cb5de49af32962cf2eb4c47395/src/datasets/arrow_dataset.py#L2799. I suspect I don't have permissions to change group permissions. I am mounting a GCP bucket through [gcsfuse](https://github.com/GoogleCloudPlatform/gcsfuse). ",
"What @lhoestq is asking for is the full multi-line traceback - it's almost never enough to show the last line - a full stack is needed to get the context. Thank you!\r\n\r\nI wonder if a workaround is to try/except and then issue a warning if this fails?"
] | 2021-03-17T00:20:22
| 2022-06-28T08:10:10
| 2021-05-10T06:45:29
|
NONE
| null |
Hello,
It seems that when a cached file is saved from calling `dataset.map` for preprocessing, it gets only the user's permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue, as we have to continually reset the permissions of the files. Do you know of any way around this, or a way to correctly set the permissions?
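For reference, a hedged workaround sketch based on the `cache_files`/`chmod` suggestion in the comments above (not the fix that eventually landed): make the cached arrow files group-readable right after loading.
```python
import os

from datasets import load_dataset

dataset = load_dataset("glue", "mrpc", split="train")
# Each entry of cache_files is a dict whose "filename" points at a cached arrow file.
for cache_file in dataset.cache_files:
    os.chmod(cache_file["filename"], 0o644)  # rw for owner, read-only for group and others
```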
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2065/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2065/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2064
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2064/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2064/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2064/events
|
https://github.com/huggingface/datasets/pull/2064
| 833,002,360
|
MDExOlB1bGxSZXF1ZXN0NTk0MDczOTQ1
| 2,064
|
Fix ted_talks_iwslt version error
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-16T16:43:45
| 2021-03-16T18:00:08
| 2021-03-16T18:00:08
|
CONTRIBUTOR
| null |
This PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly.
Fixes #2059
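For illustration, a toy sketch of the failure mode being fixed (not the actual dataset script): forwarding a `version` key through `**kwargs` while also passing `version` explicitly raises the same `TypeError` reported in #2059.
```python
class ToyConfig:
    def __init__(self, version=None, **kwargs):
        self.version = version

def build_config(**config_kwargs):
    # The hard-coded version collides with a "version" key already present in config_kwargs.
    return ToyConfig(version="1.1.0", **config_kwargs)

build_config(version="1.0.0")  # TypeError: __init__() got multiple values for keyword argument 'version'
```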
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2064/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2064/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2064",
"html_url": "https://github.com/huggingface/datasets/pull/2064",
"diff_url": "https://github.com/huggingface/datasets/pull/2064.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2064.patch",
"merged_at": "2021-03-16T18:00:07"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2063
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2063/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2063/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2063/events
|
https://github.com/huggingface/datasets/pull/2063
| 832,993,705
|
MDExOlB1bGxSZXF1ZXN0NTk0MDY2NzI5
| 2,063
|
[Common Voice] Adapt dataset script so that no manual data download is actually needed
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-16T16:33:44
| 2021-03-17T09:42:52
| 2021-03-17T09:42:37
|
MEMBER
| null |
This PR changes the dataset script so that no manual data dir is needed anymore.
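For context, a hypothetical usage sketch (not from the PR) of what the change enables: loading a Common Voice configuration without passing a manually downloaded data directory.
```python
from datasets import load_dataset

# Previously this required downloading the archive by hand and passing data_dir=...
common_voice_de = load_dataset("common_voice", "de", split="train")
```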
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2063/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2063/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2063",
"html_url": "https://github.com/huggingface/datasets/pull/2063",
"diff_url": "https://github.com/huggingface/datasets/pull/2063.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2063.patch",
"merged_at": "2021-03-17T09:42:37"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2062
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2062/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2062/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2062/events
|
https://github.com/huggingface/datasets/pull/2062
| 832,625,483
|
MDExOlB1bGxSZXF1ZXN0NTkzNzUyNTMz
| 2,062
|
docs: fix missing quotation
|
{
"login": "neal2018",
"id": 46561493,
"node_id": "MDQ6VXNlcjQ2NTYxNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/46561493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neal2018",
"html_url": "https://github.com/neal2018",
"followers_url": "https://api.github.com/users/neal2018/followers",
"following_url": "https://api.github.com/users/neal2018/following{/other_user}",
"gists_url": "https://api.github.com/users/neal2018/gists{/gist_id}",
"starred_url": "https://api.github.com/users/neal2018/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/neal2018/subscriptions",
"organizations_url": "https://api.github.com/users/neal2018/orgs",
"repos_url": "https://api.github.com/users/neal2018/repos",
"events_url": "https://api.github.com/users/neal2018/events{/privacy}",
"received_events_url": "https://api.github.com/users/neal2018/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-16T10:07:54
| 2021-03-17T09:21:57
| 2021-03-17T09:21:57
|
CONTRIBUTOR
| null |
The JSON code example is missing a quote.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2062/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2062/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2062",
"html_url": "https://github.com/huggingface/datasets/pull/2062",
"diff_url": "https://github.com/huggingface/datasets/pull/2062.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2062.patch",
"merged_at": "2021-03-17T09:21:56"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2061
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2061/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2061/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2061/events
|
https://github.com/huggingface/datasets/issues/2061
| 832,596,228
|
MDU6SXNzdWU4MzI1OTYyMjg=
| 2,061
|
Cannot load udpos subsets from xtreme dataset using load_dataset()
|
{
"login": "adzcodez",
"id": 55791365,
"node_id": "MDQ6VXNlcjU1NzkxMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/55791365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/adzcodez",
"html_url": "https://github.com/adzcodez",
"followers_url": "https://api.github.com/users/adzcodez/followers",
"following_url": "https://api.github.com/users/adzcodez/following{/other_user}",
"gists_url": "https://api.github.com/users/adzcodez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/adzcodez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/adzcodez/subscriptions",
"organizations_url": "https://api.github.com/users/adzcodez/orgs",
"repos_url": "https://api.github.com/users/adzcodez/repos",
"events_url": "https://api.github.com/users/adzcodez/events{/privacy}",
"received_events_url": "https://api.github.com/users/adzcodez/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null |
[
"@lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n\r\nThe bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.",
"Hi ! Thanks for reporting @adzcodez \r\n\r\n\r\n> @lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n> \r\n> The bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.\r\n\r\nYou're right: \"_\" should be added to the list of labels, and the examples must be sequences of tokens, not singles tokens.\r\n",
"@lhoestq Can you please label this issue with the \"good first issue\" label? I'm not sure I'll find time to fix this.\r\n\r\nTo resolve it, the user should:\r\n1. add `\"_\"` to the list of labels\r\n2. transform the udpos subset to the conll format (I think the preprocessing logic can be borrowed from [the original repo](https://github.com/google-research/xtreme/blob/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de/utils_preprocess.py#L187-L204))\r\n3. update the dummy data\r\n4. update the dataset info\r\n5. [optional] add info about the data fields structure of the udpos subset to the dataset readme",
"I tried fixing this issue, but its working fine in the dev version : \"1.6.2.dev0\"\r\n\r\nI think somebody already fixed it. ",
"Hi,\r\n\r\nafter #2326, the lines with pos tags equal to `\"_\"` are filtered out when generating the dataset, so this fixes the KeyError described above. However, the udpos subset should be in the conll format i.e. it should yield sequences of tokens and not single tokens, so it would be great to see this fixed (feel free to borrow the logic from [here](https://github.com/google-research/xtreme/blob/58a76a0d02458c4b3b6a742d3fd4ffaca80ff0de/utils_preprocess.py#L187-L204) if you decide to work on this). ",
"Closed by #2466."
] | 2021-03-16T09:32:13
| 2021-06-18T11:54:11
| 2021-06-18T11:54:10
|
NONE
| null |
Hello,
I am trying to load the udpos English subset from the xtreme dataset, but this fails with an error during loading. I am using datasets v1.4.1, installed via pip. I have tried other udpos languages, which also fail, though loading a different subset altogether (such as XNLI) works without issue. I have also tried on Colab and faced the same error.
Reprex is:
`from datasets import load_dataset `
`dataset = load_dataset('xtreme', 'udpos.English')`
The error is:
`KeyError: '_'`
The full traceback is:
```
KeyError Traceback (most recent call last)
<ipython-input-5-7181359ea09d> in <module>
1 from datasets import load_dataset
----> 2 dataset = load_dataset('xtreme', 'udpos.English')
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
738
739 # Download and prepare data
--> 740 builder_instance.download_and_prepare(
741 download_config=download_config,
742 download_mode=download_mode,
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
576 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
577 if not downloaded_from_gcs:
--> 578 self._download_and_prepare(
579 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
580 )
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
654 try:
655 # Prepare split will record examples associated to the split
--> 656 self._prepare_split(split_generator, **prepare_split_kwargs)
657 except OSError as e:
658 raise OSError(
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _prepare_split(self, split_generator)
977 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
978 ):
--> 979 example = self.info.features.encode_example(record)
980 writer.write(example)
981 finally:
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example)
946 def encode_example(self, example):
947 example = cast_to_python_objects(example)
--> 948 return encode_nested_example(self, example)
949
950 def encode_batch(self, batch):
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj)
840 # Nested structures: we allow dict, list/tuples, sequences
841 if isinstance(schema, dict):
--> 842 return {
843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
844 }
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in <dictcomp>(.0)
841 if isinstance(schema, dict):
842 return {
--> 843 k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
844 }
845 elif isinstance(schema, (list, tuple)):
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj)
868 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
869 elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):
--> 870 return schema.encode_example(obj)
871 # Other object should be directly convertible to a native Arrow type (like Translation and Translation)
872 return obj
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example_data)
647 # If a string is given, convert to associated integer
648 if isinstance(example_data, str):
--> 649 example_data = self.str2int(example_data)
650
651 # Allowing -1 to mean no label.
~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in str2int(self, values)
605 if value not in self._str2int:
606 value = value.strip()
--> 607 output.append(self._str2int[str(value)])
608 else:
609 # No names provided, try to integerize
KeyError: '_'
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2061/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2061/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2060
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2060/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2060/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2060/events
|
https://github.com/huggingface/datasets/pull/2060
| 832,588,591
|
MDExOlB1bGxSZXF1ZXN0NTkzNzIxNzcx
| 2,060
|
Filtering refactor
|
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
] | null |
[] | 2021-03-16T09:23:30
| 2021-10-13T09:09:04
| 2021-10-13T09:09:03
|
CONTRIBUTOR
| null |
fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive; currently running on `bookcorpus` with:
```python
import time

from datasets import load_dataset

bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
this branch does it in 233 seconds, master in 1409 seconds.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2060/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2060/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2060",
"html_url": "https://github.com/huggingface/datasets/pull/2060",
"diff_url": "https://github.com/huggingface/datasets/pull/2060.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2060.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2059
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2059/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2059/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2059/events
|
https://github.com/huggingface/datasets/issues/2059
| 832,579,156
|
MDU6SXNzdWU4MzI1NzkxNTY=
| 2,059
|
Error while following docs to load the `ted_talks_iwslt` dataset
|
{
"login": "ekdnam",
"id": 40426312,
"node_id": "MDQ6VXNlcjQwNDI2MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/40426312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ekdnam",
"html_url": "https://github.com/ekdnam",
"followers_url": "https://api.github.com/users/ekdnam/followers",
"following_url": "https://api.github.com/users/ekdnam/following{/other_user}",
"gists_url": "https://api.github.com/users/ekdnam/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ekdnam/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ekdnam/subscriptions",
"organizations_url": "https://api.github.com/users/ekdnam/orgs",
"repos_url": "https://api.github.com/users/ekdnam/repos",
"events_url": "https://api.github.com/users/ekdnam/events{/privacy}",
"received_events_url": "https://api.github.com/users/ekdnam/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] |
closed
| false
| null |
[] | null |
[
"@skyprince999 as you authored the PR for this dataset, any comments?",
"This has been fixed in #2064 by @mariosasko (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)"
] | 2021-03-16T09:12:19
| 2021-03-16T18:00:31
| 2021-03-16T18:00:07
|
NONE
| null |
I am currently trying to load the `ted_talks_iwslt` dataset in Google Colab.
The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so.
```python
dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
```
Executing it results in the error attached below.
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-6-7dcc67154ef9> in <module>()
----> 1 dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
4 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
730 hash=hash,
731 features=features,
--> 732 **config_kwargs,
733 )
734
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, writer_batch_size, *args, **kwargs)
927
928 def __init__(self, *args, writer_batch_size=None, **kwargs):
--> 929 super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
930 # Batch size used by the ArrowWriter
931 # It defines the number of samples that are kept in memory before writing them
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
241 name,
242 custom_features=features,
--> 243 **config_kwargs,
244 )
245
/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
337 if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION:
338 config_kwargs["version"] = self.VERSION
--> 339 builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)
340
341 # otherwise use the config_kwargs to overwrite the attributes
/root/.cache/huggingface/modules/datasets_modules/datasets/ted_talks_iwslt/024d06b1376b361e59245c5878ab8acf9a7576d765f2d0077f61751158e60914/ted_talks_iwslt.py in __init__(self, language_pair, year, **kwargs)
219 description=description,
220 version=datasets.Version("1.1.0", ""),
--> 221 **kwargs,
222 )
223
TypeError: __init__() got multiple values for keyword argument 'version'
```
How to resolve this?
PS: Thanks a lot @huggingface team for creating this great library!
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2059/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2059/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2057
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2057/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2057/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2057/events
|
https://github.com/huggingface/datasets/pull/2057
| 832,120,522
|
MDExOlB1bGxSZXF1ZXN0NTkzMzMzMjM0
| 2,057
|
update link to ZEST dataset
|
{
"login": "matt-peters",
"id": 619844,
"node_id": "MDQ6VXNlcjYxOTg0NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/619844?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/matt-peters",
"html_url": "https://github.com/matt-peters",
"followers_url": "https://api.github.com/users/matt-peters/followers",
"following_url": "https://api.github.com/users/matt-peters/following{/other_user}",
"gists_url": "https://api.github.com/users/matt-peters/gists{/gist_id}",
"starred_url": "https://api.github.com/users/matt-peters/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/matt-peters/subscriptions",
"organizations_url": "https://api.github.com/users/matt-peters/orgs",
"repos_url": "https://api.github.com/users/matt-peters/repos",
"events_url": "https://api.github.com/users/matt-peters/events{/privacy}",
"received_events_url": "https://api.github.com/users/matt-peters/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-15T19:22:57
| 2021-03-16T17:06:28
| 2021-03-16T17:06:28
|
CONTRIBUTOR
| null |
Updating the link as the original one is no longer working.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2057/reactions",
"total_count": 3,
"+1": 3,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2057/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2057",
"html_url": "https://github.com/huggingface/datasets/pull/2057",
"diff_url": "https://github.com/huggingface/datasets/pull/2057.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2057.patch",
"merged_at": "2021-03-16T17:06:28"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2056
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2056/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2056/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2056/events
|
https://github.com/huggingface/datasets/issues/2056
| 831,718,397
|
MDU6SXNzdWU4MzE3MTgzOTc=
| 2,056
|
issue with opus100/en-fr dataset
|
{
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"@lhoestq I also deleted the cache and redownload the file and still the same issue, I appreciate any help on this. thanks ",
"Here please find the minimal code to reproduce the issue @lhoestq note this only happens with MT5TokenizerFast\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers import MT5TokenizerFast\r\n\r\ndef get_tokenized_dataset(dataset_name, dataset_config_name, tokenizer):\r\n datasets = load_dataset(dataset_name, dataset_config_name, script_version=\"master\")\r\n column_names = datasets[\"train\"].column_names\r\n text_column_name = \"translation\"\r\n def process_dataset(datasets):\r\n def process_function(examples):\r\n lang = \"fr\"\r\n return {\"src_texts\": [example[lang] for example in examples[text_column_name]]}\r\n datasets = datasets.map(\r\n process_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True,\r\n )\r\n return datasets\r\n datasets = process_dataset(datasets)\r\n text_column_name = \"src_texts\"\r\n column_names = [\"src_texts\"]\r\n def tokenize_function(examples):\r\n return tokenizer(examples[text_column_name], return_special_tokens_mask=True)\r\n tokenized_datasets = datasets.map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=None,\r\n remove_columns=column_names,\r\n load_from_cache_file=True\r\n )\r\n\r\nif __name__ == \"__main__\":\r\n tokenizer_kwargs = {\r\n \"cache_dir\": None,\r\n \"use_fast\": True,\r\n \"revision\": \"main\",\r\n \"use_auth_token\": None\r\n }\r\n tokenizer = MT5TokenizerFast.from_pretrained(\"google/mt5-small\", **tokenizer_kwargs)\r\n get_tokenized_dataset(dataset_name=\"opus100\", dataset_config_name=\"en-fr\", tokenizer=tokenizer)\r\n~ \r\n```",
"as per https://github.com/huggingface/tokenizers/issues/626 this looks like to be the tokenizer bug, I therefore, reported it there https://github.com/huggingface/tokenizers/issues/626 and I am closing this one."
] | 2021-03-15T11:32:42
| 2021-03-16T15:49:00
| 2021-03-16T15:48:59
|
NONE
| null |
Hi
I am running the run_mlm.py code from the huggingface repo with the opus100/fr-en pair, and I am getting this error. Note that this error occurs only for this pair and not for the other pairs. Any idea why this is occurring and how I can solve it?
Thanks a lot @lhoestq for your help in advance.
```
thread '<unnamed>' panicked at 'index out of bounds: the len is 617 but the index is 617', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
63%|██████████████████████████████████████████████████████████▊ | 626/1000 [00:27<00:16, 22.69ba/s]
Traceback (most recent call last):
File "run_mlm.py", line 550, in <module>
main()
File "run_mlm.py", line 412, in main
in zip(data_args.dataset_name, data_args.dataset_config_name)]
File "run_mlm.py", line 411, in <listcomp>
logger) for dataset_name, dataset_config_name\
File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 96, in get_tokenized_dataset
load_from_cache_file=not data_args.overwrite_cache,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in map
for k, dataset in self.items()
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in <dictcomp>
for k, dataset in self.items()
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1309, in map
update_data=update_data,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 204, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1574, in _map_single
batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1490, in apply_function_on_filtered_inputs
function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 89, in tokenize_function
return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2347, in __call__
**kwargs,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2532, in batch_encode_plus
**kwargs,
File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 384, in _batch_encode_plus
is_pretokenized=is_split_into_words,
pyo3_runtime.PanicException: index out of bounds: the len is 617 but the index is 617
```
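As a purely speculative workaround sketch while the upstream `tokenizers` bug is open (this is an assumption, not something confirmed in this thread), one could try the slow Python tokenizer instead of `MT5TokenizerFast`:
```python
from transformers import MT5Tokenizer  # slow tokenizer, assumed workaround (unverified)
from datasets import load_dataset

tokenizer = MT5Tokenizer.from_pretrained("google/mt5-small")
dataset = load_dataset("opus100", "en-fr", split="train[:100]")

def tokenize_function(examples):
    # Tokenize only the French side, as in the reproduction script above.
    texts = [ex["fr"] for ex in examples["translation"]]
    return tokenizer(texts, return_special_tokens_mask=True)

tokenized = dataset.map(tokenize_function, batched=True)
```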
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2056/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2056/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2055
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2055/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2055/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2055/events
|
https://github.com/huggingface/datasets/issues/2055
| 831,684,312
|
MDU6SXNzdWU4MzE2ODQzMTI=
| 2,055
|
is there a way to override a dataset object saved with save_to_disk?
|
{
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi\r\nYou can rename the arrow file and update the name in `state.json`",
"I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am trying to use the following method by setting an exact cache file,\r\n\r\n```\r\n dataset_with_embedding =csv_dataset.map(\r\n partial(self.embed, ctx_encoder=ctx_encoder, ctx_tokenizer=self.context_tokenizer),\r\n batched=True,\r\n batch_size=1,\r\n features=new_features,\r\n cache_file_name=cache_arrow_path,\r\n load_from_cache_file=False\r\n )\r\n```\r\nSo here we set a cache_file_name , after this it uses the same file name when saving again and again. ",
"I'm not sure I understand your issue, can you elaborate ?\r\n\r\n`cache_file_name` is indeed an argument you can set to specify the cache file that will be used for the processed dataset. By default the file is named with something like `cache-<fingerprint>.arrow` where the fingerprint is a hash.",
"Let's say I am updating a set of embedding in a dataset that is around 40GB inside a training loop every 500 steps (Ex: calculating the embeddings in updated ctx_encoder in RAG and saving it to the passage path). So when we use **dataset_object.save_to_disk('passage_path_directory')** it will save the new dataset object every time with a random file name, especially when we do some transformations to dataset objects such as map or shards. This way, we keep collecting unwanted files that will eventually eat up all the disk space. \r\n\r\nBut if we can save the dataset object every time by a single name like **data_shard_1.arrow**, it will automatically remove the previous file and save the new one in the same directory. I found the above-mentioned code snippet useful to complete this task. \r\n\r\nIs this clear?"
] | 2021-03-15T10:50:53
| 2021-03-22T04:06:17
| 2021-03-22T04:06:17
|
NONE
| null |
At the moment, when I use save_to_disk, it uses an arbitrary name for the arrow file. Is there a way to overwrite such an object instead of creating a new file each time?
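Based on the snippet shared in the comments, here is a minimal, self-contained sketch of pinning `cache_file_name` in `.map()` so repeated runs overwrite the same arrow file instead of accumulating randomly named caches (the path and the toy `embed` function are placeholders):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})

def embed(batch):
    # Toy "embedding" (just the text length) to keep the sketch self-contained.
    return {"embeddings": [[float(len(t))] for t in batch["text"]]}

# A fixed cache_file_name means every run writes to the same arrow file
# instead of a new cache-<fingerprint>.arrow each time.
ds_with_emb = ds.map(
    embed,
    batched=True,
    cache_file_name="/tmp/data_shard_1.arrow",
    load_from_cache_file=False,
)
print(ds_with_emb)
```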
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2055/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2055/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2054
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2054/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2054/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2054/events
|
https://github.com/huggingface/datasets/issues/2054
| 831,597,665
|
MDU6SXNzdWU4MzE1OTc2NjU=
| 2,054
|
Could not find file for ZEST dataset
|
{
"login": "bhadreshpsavani",
"id": 26653468,
"node_id": "MDQ6VXNlcjI2NjUzNDY4",
"avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhadreshpsavani",
"html_url": "https://github.com/bhadreshpsavani",
"followers_url": "https://api.github.com/users/bhadreshpsavani/followers",
"following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}",
"gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions",
"organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs",
"repos_url": "https://api.github.com/users/bhadreshpsavani/repos",
"events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}",
"received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] |
closed
| false
| null |
[] | null |
[
"The zest dataset url was changed (allenai/zest#3) and #2057 should resolve this.",
"This has been fixed in #2057 by @matt-peters (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)",
"Thanks @lhoestq and @matt-peters ",
"I am closing this issue since its fixed!"
] | 2021-03-15T09:11:58
| 2021-05-03T09:30:24
| 2021-05-03T09:30:24
|
CONTRIBUTOR
| null |
I am trying to use the zest dataset from Allen AI using the code below in Colab,
```
!pip install -q datasets
from datasets import load_dataset
dataset = load_dataset("zest")
```
I am getting the following error,
```
Using custom data configuration default
Downloading and preparing dataset zest/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to /root/.cache/huggingface/datasets/zest/default/0.0.0/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca...
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-6-18dbbc1a4b8a> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("zest")
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
612 )
613 elif response is not None and response.status_code == 404:
--> 614 raise FileNotFoundError("Couldn't find file at {}".format(url))
615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
616 raise ConnectionError("Couldn't reach {}".format(url))
FileNotFoundError: Couldn't find file at https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip
```
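For reference, once the URL fix from #2057 is available (e.g. by installing `datasets` from the master branch), the same call should work. A minimal sketch, assuming a master install:
```python
# Sketch: install the version with the updated URL, e.g.
#   pip install git+https://github.com/huggingface/datasets
from datasets import load_dataset

dataset = load_dataset("zest")
print(dataset)
```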
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2054/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2054/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2053
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2053/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2053/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2053/events
|
https://github.com/huggingface/datasets/pull/2053
| 831,151,728
|
MDExOlB1bGxSZXF1ZXN0NTkyNTM4ODY2
| 2,053
|
Add bAbI QA tasks
|
{
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-14T13:04:39
| 2021-03-29T12:41:48
| 2021-03-29T12:41:48
|
CONTRIBUTOR
| null |
- **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks are designed to be prerequisites for any system that aims to be capable of conversing with a human. The aim is to classify these tasks into skill sets, so that researchers can identify (and then rectify) the failings of their systems.*
- **Paper:** [arXiv](https://arxiv.org/pdf/1502.05698.pdf)
- **Data:** [Facebook Research Page](https://research.fb.com/downloads/babi/)
- **Motivation:** This is a unique dataset with story-based Question Answering. It is a part of the `bAbI` project by Facebook Research.
**Note**: I have currently added all the 160 configs. If this seems impractical, I can keep only a few. While each `dummy_data.zip` weighs a few KBs, overall it is around 1.3MB for all configurations. This is problematic. Let me know what is to be done.
Thanks :)
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
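For illustration only, a hedged loading sketch for one of the many configurations mentioned above; the dataset id `babi_qa` and the config name `en-10k-qa1` are assumptions and may not match the final names used by this PR:
```python
from datasets import load_dataset

# "babi_qa" and "en-10k-qa1" are assumed identifiers, used only for illustration.
babi = load_dataset("babi_qa", "en-10k-qa1")
print(babi["train"][0])
```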
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2053/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2053/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2053",
"html_url": "https://github.com/huggingface/datasets/pull/2053",
"diff_url": "https://github.com/huggingface/datasets/pull/2053.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2053.patch",
"merged_at": "2021-03-29T12:41:48"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2052
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2052/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2052/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2052/events
|
https://github.com/huggingface/datasets/issues/2052
| 831,135,704
|
MDU6SXNzdWU4MzExMzU3MDQ=
| 2,052
|
Timit_asr dataset repeats examples
|
{
"login": "fermaat",
"id": 7583522,
"node_id": "MDQ6VXNlcjc1ODM1MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7583522?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fermaat",
"html_url": "https://github.com/fermaat",
"followers_url": "https://api.github.com/users/fermaat/followers",
"following_url": "https://api.github.com/users/fermaat/following{/other_user}",
"gists_url": "https://api.github.com/users/fermaat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fermaat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fermaat/subscriptions",
"organizations_url": "https://api.github.com/users/fermaat/orgs",
"repos_url": "https://api.github.com/users/fermaat/repos",
"events_url": "https://api.github.com/users/fermaat/events{/privacy}",
"received_events_url": "https://api.github.com/users/fermaat/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi,\r\n\r\nthis was fixed by #1995, so you can wait for the next release or install the package directly from the master branch with the following command: \r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n```",
"Ty!"
] | 2021-03-14T11:43:43
| 2021-03-15T10:37:16
| 2021-03-15T10:37:16
|
NONE
| null |
Summary
When loading the timit_asr dataset on datasets 1.4+, every row in the dataset is the same.
Steps to reproduce
As an example, this code prints the text from the training split:
Code snippet:
```
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
timit['train']['text']
#['Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
```
The same behavior happens for other columns
Expected behavior:
Different info on the actual timit_asr dataset
Actual behavior:
When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same. I've checked datasets 1.3 and the rows are different
Debug info
Streamlit version: (get it with $ streamlit version)
Python version: Python 3.6.12
Using Conda? PipEnv? PyEnv? Pex? Using pip
OS version: Centos-release-7-9.2009.1.el7.centos.x86_64
Additional information
You can check the same behavior on https://huggingface.co/datasets/viewer/?dataset=timit_asr
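For reference, a quick sanity check sketch, assuming `datasets` is installed from the master branch where the #1995 fix lives:
```python
# Sketch, assuming a master install:
#   pip install git+https://github.com/huggingface/datasets
from datasets import load_dataset

timit = load_dataset("timit_asr")
print(timit["train"]["text"][:3])  # rows should now differ instead of repeating
```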
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2052/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2052/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2051
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2051/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2051/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2051/events
|
https://github.com/huggingface/datasets/pull/2051
| 831,027,021
|
MDExOlB1bGxSZXF1ZXN0NTkyNDQ2MDU1
| 2,051
|
Add MDD Dataset
|
{
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-14T00:01:05
| 2021-03-19T11:15:44
| 2021-03-19T10:31:59
|
CONTRIBUTOR
| null |
- **Name:** *MDD Dataset*
- **Description:** The Movie Dialog dataset (MDD) is designed to measure how well models can perform at goal and non-goal orientated dialog centered around the topic of movies (question answering, recommendation and discussion), from various movie reviews sources such as MovieLens and OMDb.
- **Paper:** [arXiv](https://arxiv.org/pdf/1511.06931.pdf)
- **Data:** https://research.fb.com/downloads/babi/
- **Motivation:** This is one of the popular dialog datasets, a part of Facebook Research's "bAbI project".
### Checkbox
- [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template
- [x] Fill the `_DESCRIPTION` and `_CITATION` variables
- [x] Implement `_infos()`, `_split_generators()` and `_generate_examples()`
- [x] Make sure that the `BUILDER_CONFIGS` class attribute is filled with the different configurations of the dataset and that the `BUILDER_CONFIG_CLASS` is specified if there is a custom config class.
- [x] Generate the metadata file `dataset_infos.json` for all configurations
- [x] Generate the dummy data `dummy_data.zip` files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card `README.md` using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass.
**Note**: I haven't included the following from the data files: `entities` (the file containing list of all entities in the first three subtasks), `dictionary`(the dictionary of words they use in their models), `movie_kb`(contains the knowledge base of information about the movies, actors and other entities that are mentioned in the dialogs). Please let me know if those are needed, and if yes, should I make separate configurations for them?
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2051/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2051/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2051",
"html_url": "https://github.com/huggingface/datasets/pull/2051",
"diff_url": "https://github.com/huggingface/datasets/pull/2051.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2051.patch",
"merged_at": "2021-03-19T10:31:59"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2050
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2050/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2050/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2050/events
|
https://github.com/huggingface/datasets/issues/2050
| 831,006,551
|
MDU6SXNzdWU4MzEwMDY1NTE=
| 2,050
|
Build custom dataset to fine-tune Wav2Vec2
|
{
"login": "Omarnabk",
"id": 72882909,
"node_id": "MDQ6VXNlcjcyODgyOTA5",
"avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Omarnabk",
"html_url": "https://github.com/Omarnabk",
"followers_url": "https://api.github.com/users/Omarnabk/followers",
"following_url": "https://api.github.com/users/Omarnabk/following{/other_user}",
"gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions",
"organizations_url": "https://api.github.com/users/Omarnabk/orgs",
"repos_url": "https://api.github.com/users/Omarnabk/repos",
"events_url": "https://api.github.com/users/Omarnabk/events{/privacy}",
"received_events_url": "https://api.github.com/users/Omarnabk/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false
| null |
[] | null |
[
"@lhoestq - We could simply use the \"general\" json dataset for this no? ",
"Sure you can use the json loader\r\n```python\r\ndata_files = {\"train\": \"path/to/your/train_data.json\", \"test\": \"path/to/your/test_data.json\"}\r\ntrain_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\ntest_dataset = load_dataset(\"json\", data_files=data_files, split=\"test\")\r\n```\r\n\r\nYou just need to make sure that the data contain the paths to the audio files.\r\nIf not, feel free to use `.map()` to add them.",
"Many thanks! that was what I was looking for. "
] | 2021-03-13T22:01:10
| 2021-03-15T09:27:28
| 2021-03-15T09:27:28
|
NONE
| null |
Thank you for your recent tutorial on how to fine-tune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcripts and their audio files) in a JSON file.
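Building on the json loader shown in the comments, a hedged sketch of adding absolute audio paths with `.map()`; the manifest field names (`file`, `text`) and the paths are assumptions:
```python
import os
from datasets import load_dataset

# Assumed manifest layout: one JSON record per line with "file" and "text" fields.
data_files = {"train": "train_manifest.json", "test": "test_manifest.json"}
dataset = load_dataset("json", data_files=data_files)

audio_root = "/path/to/audio"  # assumption: paths in the manifest are relative

def add_full_path(example):
    example["file"] = os.path.join(audio_root, example["file"])
    return example

dataset = dataset.map(add_full_path)
print(dataset["train"][0])
```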
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2050/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2050/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2049
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2049/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2049/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2049/events
|
https://github.com/huggingface/datasets/pull/2049
| 830,978,687
|
MDExOlB1bGxSZXF1ZXN0NTkyNDE2MzQ0
| 2,049
|
Fix text-classification tags
|
{
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-13T19:51:42
| 2021-03-16T15:47:46
| 2021-03-16T15:47:46
|
CONTRIBUTOR
| null |
There are different tags for text classification right now: `text-classification` and `text_classification`.
This PR fixes it.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2049/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2049/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2049",
"html_url": "https://github.com/huggingface/datasets/pull/2049",
"diff_url": "https://github.com/huggingface/datasets/pull/2049.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2049.patch",
"merged_at": "2021-03-16T15:47:46"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2048
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2048/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2048/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2048/events
|
https://github.com/huggingface/datasets/issues/2048
| 830,953,431
|
MDU6SXNzdWU4MzA5NTM0MzE=
| 2,048
|
github is not always available - probably need a back up
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-13T18:03:32
| 2022-04-01T15:27:10
| 2022-04-01T15:27:10
|
MEMBER
| null |
Yesterday morning github wasn't working:
```
:/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
--2021-03-12 18:35:59--  https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2021-03-12 18:36:11 ERROR 500: Internal Server Error.
```
Suggestion: have a failover system that replicates the data on another system and falls back to it if GitHub isn't reachable? Perhaps GitHub could be the master and the replica a slave, so there is only one true source.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2048/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2048/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2047
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2047/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2047/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2047/events
|
https://github.com/huggingface/datasets/pull/2047
| 830,626,430
|
MDExOlB1bGxSZXF1ZXN0NTkyMTI2NzQ3
| 2,047
|
Multilingual dIalogAct benchMark (miam)
|
{
"login": "eusip",
"id": 1551356,
"node_id": "MDQ6VXNlcjE1NTEzNTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1551356?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eusip",
"html_url": "https://github.com/eusip",
"followers_url": "https://api.github.com/users/eusip/followers",
"following_url": "https://api.github.com/users/eusip/following{/other_user}",
"gists_url": "https://api.github.com/users/eusip/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eusip/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eusip/subscriptions",
"organizations_url": "https://api.github.com/users/eusip/orgs",
"repos_url": "https://api.github.com/users/eusip/repos",
"events_url": "https://api.github.com/users/eusip/events{/privacy}",
"received_events_url": "https://api.github.com/users/eusip/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-12T23:02:55
| 2021-03-23T10:36:34
| 2021-03-19T10:47:13
|
CONTRIBUTOR
| null |
My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2047/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2047/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2047",
"html_url": "https://github.com/huggingface/datasets/pull/2047",
"diff_url": "https://github.com/huggingface/datasets/pull/2047.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2047.patch",
"merged_at": "2021-03-19T10:47:13"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2046
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2046/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2046/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2046/events
|
https://github.com/huggingface/datasets/issues/2046
| 830,423,033
|
MDU6SXNzdWU4MzA0MjMwMzM=
| 2,046
|
add_faiss_index gets very slow when doing it iteratively
|
{
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"I think faiss automatically sets the number of threads to use to build the index.\r\nCan you check how many CPU cores are being used when you build the index in `use_own_knowleldge_dataset` as compared to this script ? Are there other programs running (maybe for rank>0) ?",
"Hi,\r\n I am running the add_faiss_index during the training process of the RAG from the master process (rank 0). But at the exact moment, I do not run any other process since I do it in every 5000 training steps. \r\n \r\n I think what you say is correct. It depends on the number of CPU cores. I did an experiment to compare the time taken to finish the add_faiss_index process on use_own_knowleldge_dataset.py vs the training loop thing. The training loop thing takes 40 mins more. It might be natural right? \r\n \r\n \r\n at the moment it uses around 40 cores of a 96 core machine (I am fine-tuning the entire process). ",
"Can you try to set the number of threads manually ?\r\nIf you set the same number of threads for both the `use_own_knowledge_dataset.py` and RAG training, it should take the same amount of time.\r\nYou can see how to set the number of thread in the faiss wiki: https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls",
"Ok, I will report the details too soon. I am the first one on the list and currently add_index being computed for the 3rd time in the loop. Actually seems like the time is taken to complete each interaction is the same, but around 1 hour more compared to running it without the training loop. A the moment this takes 5hrs and 30 mins. If there is any way to faster the process, an end-to-end rag will be perfect. So I will also try out with different thread numbers too. \r\n\r\n\r\n",
"@lhoestq on a different note, I read about using Faiss-GPU, but the documentation says we should use it when the dataset has the ability to fit into the GPU memory. Although this might work, in the long-term this is not that practical for me.\r\n\r\nhttps://github.com/matsui528/faiss_tips",
"@lhoestq \r\n\r\nHi, I executed the **use_own_dataset.py** script independently and ask a few of my friends to run their programs in the HPC machine at the same time. \r\n\r\n Once there are so many other processes are running the add_index function gets slows down naturally. So basically the speed of the add_index depends entirely on the number of CPU processes. Then I set the number of threads as you have mentioned and got actually the same time for RAG training and independat running. So you are correct! :) \r\n\r\n \r\n Then I added this [issue in Faiss repostiary](https://github.com/facebookresearch/faiss/issues/1767). I got an answer saying our current **IndexHNSWFlat** can get slow for 30 million vectors and it would be better to use alternatives. What do you think?",
"It's a matter of tradeoffs.\r\nHSNW is fast at query time but takes some time to build.\r\nA flat index is flat to build but is \"slow\" at query time.\r\nAn IVF index is probably a good choice for you: fast building and fast queries (but still slower queries than HSNW).\r\n\r\nNote that for an IVF index you would need to have an `nprobe` parameter (number of cells to visit for one query, there are `nlist` in total) that is not too small in order to have good retrieval accuracy, but not too big otherwise the queries will take too much time. From the faiss documentation:\r\n> The nprobe parameter is always a way of adjusting the tradeoff between speed and accuracy of the result. Setting nprobe = nlist gives the same result as the brute-force search (but slower).\r\n\r\nFrom my experience with indexes on DPR embeddings, setting nprobe around 1/4 of nlist gives really good retrieval accuracy and there's no need to have a value higher than that (or you would need to brute-force in order to see a difference).",
"@lhoestq \r\n\r\nThanks a lot for sharing all this prior knowledge. \r\n\r\nJust asking what would be a good nlist of parameters for 30 million embeddings?",
"When IVF is used alone, nlist should be between `4*sqrt(n)` and `16*sqrt(n)`.\r\nFor more details take a look at [this section of the Faiss wiki](https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index#how-big-is-the-dataset)",
"Thanks a lot. I was lost with calling the index from class and using faiss_index_factory. ",
"@lhoestq Thanks a lot for the help you have given to solve this issue. As per my experiments, IVF index suits well for my case and it is a lot faster. The use of this can make the entire RAG end-to-end trainable lot faster. So I will close this issue. Will do the final PR soon. "
] | 2021-03-12T20:27:18
| 2021-03-24T22:29:11
| 2021-03-24T22:29:11
|
NONE
| null |
As the code below suggests, I want to run add_faiss_index every nth iteration of the training loop. I have 7.2 million documents. Usually it takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowledge_dataset.py). Now it usually takes 5 hrs. Is this normal? Is there any way to make this process faster?
@lhoestq
```
def training_step(self, batch, batch_idx) -> Dict:
    if (not batch_idx == 0) and (batch_idx % 5 == 0):
        print("******************************************************")
        ctx_encoder = self.trainer.model.module.module.model.rag.ctx_encoder
        model_copy = type(ctx_encoder)(self.config_dpr)  # get a new instance; this will be loaded on the CPU
        model_copy.load_state_dict(ctx_encoder.state_dict())  # copy weights and stuff
        list_of_gpus = ['cuda:2', 'cuda:3']
        c_dir = '/custom/cache/dir'
        kb_dataset = load_dataset("csv", data_files=[self.custom_config.csv_path], split="train", delimiter="\t", column_names=["title", "text"], cache_dir=c_dir)
        print(kb_dataset)
        n = len(list_of_gpus)  # number of dedicated GPUs
        kb_list = [kb_dataset.shard(n, i, contiguous=True) for i in range(n)]
        # kb_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/haha-dir')
        print(self.trainer.global_rank)
        dataset_shards = self.re_encode_kb(model_copy.to(device=list_of_gpus[self.trainer.global_rank]), kb_list[self.trainer.global_rank])
        output = [None for _ in list_of_gpus]
        # self.trainer.accelerator_connector.accelerator.barrier("embedding_process")
        dist.all_gather_object(output, dataset_shards)
        # This creates and re-initializes the new index
        if self.trainer.global_rank == 0:  # saving will be done in the main process
            combined_dataset = concatenate_datasets(output)
            passages_path = self.config.passages_path
            logger.info("saving the dataset with ")
            # combined_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/MY-Passage')
            combined_dataset.save_to_disk(passages_path)
            logger.info("Add faiss index to the dataset that consists of embeddings")
            embedding_dataset = combined_dataset
            index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT)
            embedding_dataset.add_faiss_index("embeddings", custom_index=index)
            embedding_dataset.get_index("embeddings").save(self.config.index_path)
```
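Following the discussion in the comments (an IVF index instead of HNSW, `nlist` between 4*sqrt(n) and 16*sqrt(n), `nprobe` around `nlist/4`), here is a hedged, toy-sized sketch of what swapping the index type could look like; the numbers are illustrative, not final settings:
```python
import math

import faiss
import numpy as np
from datasets import Dataset

dim = 768
n = 10_000  # toy size; the real corpus is ~30M passages
embeddings = np.random.rand(n, dim).astype("float32")
dataset = Dataset.from_dict({"embeddings": embeddings.tolist()})

nlist = int(4 * math.sqrt(n))              # rule of thumb from the comments
quantizer = faiss.IndexFlatIP(dim)
index = faiss.IndexIVFFlat(quantizer, dim, nlist, faiss.METRIC_INNER_PRODUCT)
index.train(embeddings)                    # IVF indexes need training before adding vectors
index.nprobe = max(1, nlist // 4)          # ~nlist/4 per the discussion above

dataset.add_faiss_index("embeddings", custom_index=index)
scores, examples = dataset.get_nearest_examples("embeddings", embeddings[0], k=5)
```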
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2046/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2046/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2045
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2045/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2045/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2045/events
|
https://github.com/huggingface/datasets/pull/2045
| 830,351,527
|
MDExOlB1bGxSZXF1ZXN0NTkxODc2Mjcz
| 2,045
|
Preserve column ordering in Dataset.rename_column
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-12T18:26:47
| 2021-03-16T14:48:05
| 2021-03-16T14:35:05
|
CONTRIBUTOR
| null |
Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns:
```python
>>> from datasets import Dataset
>>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]})
>>> d
Dataset({
features: ['sentences', 'label'],
num_rows: 2
})
>>> d.rename_column('sentences', 'text')
Dataset({
features: ['label', 'text'],
num_rows: 2
})
```
This PR fixes this.
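With this change, the renamed column keeps its original position, i.e. the expected output becomes (illustration):
```python
>>> d.rename_column('sentences', 'text')
Dataset({
    features: ['text', 'label'],
    num_rows: 2
})
```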
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2045/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2045/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2045",
"html_url": "https://github.com/huggingface/datasets/pull/2045",
"diff_url": "https://github.com/huggingface/datasets/pull/2045.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2045.patch",
"merged_at": "2021-03-16T14:35:05"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2044
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2044/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2044/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2044/events
|
https://github.com/huggingface/datasets/pull/2044
| 830,339,905
|
MDExOlB1bGxSZXF1ZXN0NTkxODY2NzM1
| 2,044
|
Add CBT dataset
|
{
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-12T18:04:19
| 2021-03-19T11:10:13
| 2021-03-19T10:29:15
|
CONTRIBUTOR
| null |
This PR adds the [CBT Dataset](https://arxiv.org/abs/1511.02301).
Note that I have also added the `raw` dataset as a separate configuration. I couldn't find a suitable "task" for it in YAML tags.
The dummy files have one example each, as the examples are slightly big. For `raw` dataset, I just used top few lines, because they are entire books and would take up a lot of space.
Let me know in case of any issues.
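For illustration only, a hedged loading sketch; the config names `CN` and `raw` are assumptions based on the description above:
```python
from datasets import load_dataset

# "CN" (common nouns) and "raw" are assumed configuration names, for illustration only.
cbt_cn = load_dataset("cbt", "CN")
cbt_raw = load_dataset("cbt", "raw")
print(cbt_cn["train"][0])
```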
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2044/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2044/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2044",
"html_url": "https://github.com/huggingface/datasets/pull/2044",
"diff_url": "https://github.com/huggingface/datasets/pull/2044.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2044.patch",
"merged_at": "2021-03-19T10:29:15"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2043
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2043/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2043/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2043/events
|
https://github.com/huggingface/datasets/pull/2043
| 830,279,098
|
MDExOlB1bGxSZXF1ZXN0NTkxODE1ODAz
| 2,043
|
Support pickle protocol for dataset splits defined as ReadInstruction
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-12T16:35:11
| 2021-03-16T14:25:38
| 2021-03-16T14:05:05
|
CONTRIBUTOR
| null |
Fixes #2022 (+ some style fixes)
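A minimal sketch of the behavior this enables (the dataset name is just an example):
```python
import pickle

from datasets import ReadInstruction, load_dataset

ds = load_dataset("squad", split=ReadInstruction("train", to=10, unit="%"))
restored = pickle.loads(pickle.dumps(ds))  # pickling a split defined as a ReadInstruction
print(len(restored) == len(ds))
```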
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2043/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2043/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2043",
"html_url": "https://github.com/huggingface/datasets/pull/2043",
"diff_url": "https://github.com/huggingface/datasets/pull/2043.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2043.patch",
"merged_at": "2021-03-16T14:05:05"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2042
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2042/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2042/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2042/events
|
https://github.com/huggingface/datasets/pull/2042
| 830,190,276
|
MDExOlB1bGxSZXF1ZXN0NTkxNzQwNzQ3
| 2,042
|
Fix arrow memory checks issue in tests
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-12T14:49:52
| 2021-03-12T15:04:23
| 2021-03-12T15:04:22
|
MEMBER
| null |
The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory.
From my experiments, the tests fail only when the full test suite is run.
This made me think that maybe some arrow objects from other tests were not freeing their memory in time, causing the memory verifications to fail in other tests.
Running the garbage collector before checking the arrow memory usage seems to fix this issue.
I added a context manager `assert_arrow_memory_increases` that we can use in tests and that deals with the gc.
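A hedged sketch of what such a gc-aware context manager could look like (not necessarily the exact implementation merged here):
```python
import gc
from contextlib import contextmanager

import pyarrow as pa

@contextmanager
def assert_arrow_memory_increases():
    # Free any dangling arrow objects from previous tests before measuring.
    gc.collect()
    before = pa.total_allocated_bytes()
    yield
    assert pa.total_allocated_bytes() > before
```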
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2042/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2042/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2042",
"html_url": "https://github.com/huggingface/datasets/pull/2042",
"diff_url": "https://github.com/huggingface/datasets/pull/2042.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2042.patch",
"merged_at": "2021-03-12T15:04:22"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2041
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2041/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2041/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2041/events
|
https://github.com/huggingface/datasets/pull/2041
| 830,180,803
|
MDExOlB1bGxSZXF1ZXN0NTkxNzMyNzMw
| 2,041
|
Doc2dial update data_infos and data_loaders
|
{
"login": "songfeng",
"id": 2062185,
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songfeng",
"html_url": "https://github.com/songfeng",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"repos_url": "https://api.github.com/users/songfeng/repos",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-12T14:39:29
| 2021-03-16T11:09:20
| 2021-03-16T11:09:20
|
CONTRIBUTOR
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2041/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2041/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2041",
"html_url": "https://github.com/huggingface/datasets/pull/2041",
"diff_url": "https://github.com/huggingface/datasets/pull/2041.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2041.patch",
"merged_at": "2021-03-16T11:09:20"
}
| true
|
|
https://api.github.com/repos/huggingface/datasets/issues/2040
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2040/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2040/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2040/events
|
https://github.com/huggingface/datasets/issues/2040
| 830,169,387
|
MDU6SXNzdWU4MzAxNjkzODc=
| 2,040
|
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
|
{
"login": "simonschoe",
"id": 53626067,
"node_id": "MDQ6VXNlcjUzNjI2MDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonschoe",
"html_url": "https://github.com/simonschoe",
"followers_url": "https://api.github.com/users/simonschoe/followers",
"following_url": "https://api.github.com/users/simonschoe/following{/other_user}",
"gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions",
"organizations_url": "https://api.github.com/users/simonschoe/orgs",
"repos_url": "https://api.github.com/users/simonschoe/repos",
"events_url": "https://api.github.com/users/simonschoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonschoe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?\r\nThey should both have a path to an arrow file\r\n\r\nAlso note that from #2025 concatenating datasets will no longer have such restrictions.",
"Sure, thanks for the fast reply!\r\n\r\nFor dataset A: `[{'filename': 'drive/MyDrive/data_target_task/dataset_a/train/cache-4797266bf4db1eb7.arrow'}]`\r\nFor dataset B: `[]`\r\n\r\nNo clue why for B it returns nothing. `PATH_DATA_CLS_B` is exactly the same in `save_to_disk` and `load_from_disk`... Also I can verify that the folder physically exists under 'drive/MyDrive/data_target_task/dataset_b/'",
"In the next release you'll be able to concatenate any kinds of dataset (either from memory or from disk).\r\n\r\nFor now I'd suggest you to flatten the indices of the A and B datasets. This will remove the indices mapping and you will be able to concatenate them. You can flatten the indices with\r\n```python\r\ndataset = dataset.flatten_indices()\r\n```",
"Indeed this works. Not the most elegant solution, but it does the trick. Thanks a lot! "
] | 2021-03-12T14:27:00
| 2021-08-04T18:00:43
| 2021-08-04T18:00:43
|
NONE
| null |
Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```
Yielding the following error:
```python
ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk.
However datasets' indices [1] come from memory and datasets' indices [0] come from disk.
```
I've been trying to solve this for quite some time now. Both `DataDict`s have been created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove col, rename col). I can't figure it out though...
`load_from_disk(PATH_DATA_CLS_A)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 785
})
```
`load_from_disk(PATH_DATA_CLS_B)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 3341
})
```
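For reference, a minimal sketch of the workaround suggested in the comments above: flattening the indices removes the in-memory/on-disk indices mapping, after which the two datasets can be concatenated (the paths below are hypothetical placeholders for `PATH_DATA_CLS_A`/`PATH_DATA_CLS_B`).
```python
from datasets import concatenate_datasets, load_from_disk

train_a = load_from_disk("data/dataset_a")["train"].flatten_indices()
train_b = load_from_disk("data/dataset_b")["train"].flatten_indices()

combined = concatenate_datasets([train_a, train_b])
```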
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2040/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2040/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2039
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2039/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2039/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2039/events
|
https://github.com/huggingface/datasets/pull/2039
| 830,047,652
|
MDExOlB1bGxSZXF1ZXN0NTkxNjE3ODY3
| 2,039
|
Doc2dial rc
|
{
"login": "songfeng",
"id": 2062185,
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songfeng",
"html_url": "https://github.com/songfeng",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"repos_url": "https://api.github.com/users/songfeng/repos",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-12T11:56:28
| 2021-03-12T15:32:36
| 2021-03-12T15:32:36
|
CONTRIBUTOR
| null |
Added a fix to handle the case where the last turn is a user turn.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2039/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2039/timeline
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2039",
"html_url": "https://github.com/huggingface/datasets/pull/2039",
"diff_url": "https://github.com/huggingface/datasets/pull/2039.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2039.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2038
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2038/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2038/events
|
https://github.com/huggingface/datasets/issues/2038
| 830,036,875
|
MDU6SXNzdWU4MzAwMzY4NzU=
| 2,038
|
outdated dataset_infos.json might fail verifications
|
{
"login": "songfeng",
"id": 2062185,
"node_id": "MDQ6VXNlcjIwNjIxODU=",
"avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/songfeng",
"html_url": "https://github.com/songfeng",
"followers_url": "https://api.github.com/users/songfeng/followers",
"following_url": "https://api.github.com/users/songfeng/following{/other_user}",
"gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/songfeng/subscriptions",
"organizations_url": "https://api.github.com/users/songfeng/orgs",
"repos_url": "https://api.github.com/users/songfeng/repos",
"events_url": "https://api.github.com/users/songfeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/songfeng/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```",
"Fixed by #2041, thanks again @songfeng !"
] | 2021-03-12T11:41:54
| 2021-03-16T16:27:40
| 2021-03-16T16:27:40
|
CONTRIBUTOR
| null |
The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It makes the data loader fail when verifying the download checksums etc.
Could you please update this file or point me to how to update it myself?
Thank you.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2038/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2037
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2037/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2037/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2037/events
|
https://github.com/huggingface/datasets/pull/2037
| 829,919,685
|
MDExOlB1bGxSZXF1ZXN0NTkxNTA4MTQz
| 2,037
|
Fix: Wikipedia - save memory by replacing root.clear with elem.clear
|
{
"login": "miyamonz",
"id": 6331508,
"node_id": "MDQ6VXNlcjYzMzE1MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miyamonz",
"html_url": "https://github.com/miyamonz",
"followers_url": "https://api.github.com/users/miyamonz/followers",
"following_url": "https://api.github.com/users/miyamonz/following{/other_user}",
"gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions",
"organizations_url": "https://api.github.com/users/miyamonz/orgs",
"repos_url": "https://api.github.com/users/miyamonz/repos",
"events_url": "https://api.github.com/users/miyamonz/events{/privacy}",
"received_events_url": "https://api.github.com/users/miyamonz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-12T09:22:00
| 2021-03-23T06:08:16
| 2021-03-16T11:01:22
|
CONTRIBUTOR
| null |
see: https://github.com/huggingface/datasets/issues/2031
What I did:
- replace root.clear with elem.clear
- remove the lines that get the root element
- $ make style
- $ make test
- some tests required additional pip packages, so I installed them.
The test results on origin/master and on my branch are the same, so I think the failure is unrelated to my modification, isn't it?
```
==================================================================================== short test summary info ====================================================================================
FAILED tests/test_arrow_writer.py::TypedSequenceTest::test_catch_overflow - AssertionError: OverflowError not raised
============================================================= 1 failed, 2332 passed, 5138 skipped, 70 warnings in 91.75s (0:01:31) ==============================================================
make: *** [Makefile:19: test] Error 1
```
Is there anything else I should do?
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2037/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2037/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2037",
"html_url": "https://github.com/huggingface/datasets/pull/2037",
"diff_url": "https://github.com/huggingface/datasets/pull/2037.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2037.patch",
"merged_at": "2021-03-16T11:01:22"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2036
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2036/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2036/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2036/events
|
https://github.com/huggingface/datasets/issues/2036
| 829,909,258
|
MDU6SXNzdWU4Mjk5MDkyNTg=
| 2,036
|
Cannot load wikitext
|
{
"login": "Gpwner",
"id": 19349207,
"node_id": "MDQ6VXNlcjE5MzQ5MjA3",
"avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Gpwner",
"html_url": "https://github.com/Gpwner",
"followers_url": "https://api.github.com/users/Gpwner/followers",
"following_url": "https://api.github.com/users/Gpwner/following{/other_user}",
"gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions",
"organizations_url": "https://api.github.com/users/Gpwner/orgs",
"repos_url": "https://api.github.com/users/Gpwner/repos",
"events_url": "https://api.github.com/users/Gpwner/events{/privacy}",
"received_events_url": "https://api.github.com/users/Gpwner/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Solved!"
] | 2021-03-12T09:09:39
| 2021-03-15T08:45:02
| 2021-03-15T08:44:44
|
NONE
| null |
When I execute this code
```
>>> from datasets import load_dataset
>>> test_dataset = load_dataset("wikitext")
```
I got an error; any help?
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache
raise ConnectionError("Couldn't reach {}".format(url))
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2036/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2036/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2034
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2034/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2034/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2034/events
|
https://github.com/huggingface/datasets/pull/2034
| 829,381,388
|
MDExOlB1bGxSZXF1ZXN0NTkxMDU2MTEw
| 2,034
|
Fix typo
|
{
"login": "pcyin",
"id": 3413464,
"node_id": "MDQ6VXNlcjM0MTM0NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/3413464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pcyin",
"html_url": "https://github.com/pcyin",
"followers_url": "https://api.github.com/users/pcyin/followers",
"following_url": "https://api.github.com/users/pcyin/following{/other_user}",
"gists_url": "https://api.github.com/users/pcyin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pcyin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pcyin/subscriptions",
"organizations_url": "https://api.github.com/users/pcyin/orgs",
"repos_url": "https://api.github.com/users/pcyin/repos",
"events_url": "https://api.github.com/users/pcyin/events{/privacy}",
"received_events_url": "https://api.github.com/users/pcyin/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-11T17:46:13
| 2021-03-11T18:06:25
| 2021-03-11T18:06:25
|
CONTRIBUTOR
| null |
Change `ENV_XDG_CACHE_HOME ` to `XDG_CACHE_HOME `
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2034/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2034/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2034",
"html_url": "https://github.com/huggingface/datasets/pull/2034",
"diff_url": "https://github.com/huggingface/datasets/pull/2034.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2034.patch",
"merged_at": "2021-03-11T18:06:25"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2033
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2033/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2033/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2033/events
|
https://github.com/huggingface/datasets/pull/2033
| 829,295,339
|
MDExOlB1bGxSZXF1ZXN0NTkwOTgzMDAy
| 2,033
|
Raise an error for outdated sacrebleu versions
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-11T16:08:00
| 2021-03-11T17:58:12
| 2021-03-11T17:58:12
|
MEMBER
| null |
The `sacrebleu` metric seems to only work with sacrebleu>=1.4.12
For example using sacrebleu==1.2.10, an error is raised (from metric/sacrebleu/sacrebleu.py):
```python
def _compute(
self,
predictions,
references,
smooth_method="exp",
smooth_value=None,
force=False,
lowercase=False,
tokenize=scb.DEFAULT_TOKENIZER,
use_effective_order=False,
):
references_per_prediction = len(references[0])
if any(len(refs) != references_per_prediction for refs in references):
raise ValueError("Sacrebleu requires the same number of references for each prediction")
transformed_references = [[refs[i] for refs in references] for i in range(references_per_prediction)]
> output = scb.corpus_bleu(
sys_stream=predictions,
ref_streams=transformed_references,
smooth_method=smooth_method,
smooth_value=smooth_value,
force=force,
lowercase=lowercase,
tokenize=tokenize,
use_effective_order=use_effective_order,
)
E TypeError: corpus_bleu() got an unexpected keyword argument 'smooth_method'
/mnt/cache/modules/datasets_modules/metrics/sacrebleu/b390045b3d1dd4abf6a95c4a2a11ee3bcc2b7620b076204d0ddc353fa649fd86/sacrebleu.py:114: TypeError
```
I improved the error message when users have an outdated version of sacrebleu.
The new error message tells the user to update sacrebleu.
cc @LysandreJik
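As an illustration only (not the exact code merged in this PR), a version guard along these lines could surface a clearer message for outdated installs:
```python
from packaging import version
import sacrebleu as scb

# Hypothetical guard: fail early with an actionable message instead of a
# cryptic TypeError when sacrebleu is too old to support `smooth_method`.
if version.parse(scb.__version__) < version.parse("1.4.12"):
    raise ImportError(
        "To use the `sacrebleu` metric, please install `sacrebleu>=1.4.12`, "
        'e.g. with `pip install "sacrebleu>=1.4.12"`.'
    )
```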
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2033/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2033/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2033",
"html_url": "https://github.com/huggingface/datasets/pull/2033",
"diff_url": "https://github.com/huggingface/datasets/pull/2033.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2033.patch",
"merged_at": "2021-03-11T17:58:12"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2031
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2031/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2031/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2031/events
|
https://github.com/huggingface/datasets/issues/2031
| 829,122,778
|
MDU6SXNzdWU4MjkxMjI3Nzg=
| 2,031
|
wikipedia.py generator that extracts XML doesn't release memory
|
{
"login": "miyamonz",
"id": 6331508,
"node_id": "MDQ6VXNlcjYzMzE1MDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/miyamonz",
"html_url": "https://github.com/miyamonz",
"followers_url": "https://api.github.com/users/miyamonz/followers",
"following_url": "https://api.github.com/users/miyamonz/following{/other_user}",
"gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}",
"starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions",
"organizations_url": "https://api.github.com/users/miyamonz/orgs",
"repos_url": "https://api.github.com/users/miyamonz/repos",
"events_url": "https://api.github.com/users/miyamonz/events{/privacy}",
"received_events_url": "https://api.github.com/users/miyamonz/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @miyamonz \r\nThanks for investigating this issue, good job !\r\nIt would be awesome to integrate your fix in the library, could you open a pull request ?",
"OK! I'll send it later."
] | 2021-03-11T12:51:24
| 2021-03-22T08:33:52
| 2021-03-22T08:33:52
|
CONTRIBUTOR
| null |
I tried downloading the Japanese Wikipedia, but it always failed, probably because it ran out of memory.
I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop.
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L464-L502
`root.clear()` is intended to free memory, but it doesn't.
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L490
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L494
I replaced them with `elem.clear()`, and then it seems to work correctly.
Here is a notebook to reproduce it:
https://gist.github.com/miyamonz/dc06117302b6e85fa51cbf46dde6bb51#file-xtract_content-ipynb
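For context, a minimal sketch of the memory-friendly `iterparse` pattern (clearing each element once it has been processed, instead of clearing the root); this is a simplified illustration, not the actual wikipedia.py code:
```python
import xml.etree.ElementTree as ET


def iter_pages(xml_path):
    # Yield <page> elements one at a time and free each one afterwards,
    # so memory stays roughly constant while streaming a huge dump.
    for _event, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag.endswith("page"):
            yield elem
            elem.clear()  # release the element's children instead of root.clear()
```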
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2031/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2031/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2030
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2030/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2030/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2030/events
|
https://github.com/huggingface/datasets/pull/2030
| 829,110,803
|
MDExOlB1bGxSZXF1ZXN0NTkwODI4NzQ4
| 2,030
|
Implement Dataset from text
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-11T12:34:50
| 2021-03-18T13:29:29
| 2021-03-18T13:29:29
|
MEMBER
| null |
Implement `Dataset.from_text`.
Analogue to #1943, #1946.
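Usage would presumably mirror the existing `Dataset.from_csv`/`Dataset.from_json` helpers, e.g. (hypothetical file path; each line of the file becomes one example with a single `text` column):
```python
from datasets import Dataset

ds = Dataset.from_text("path/to/corpus.txt")
print(ds[0]["text"])
```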
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2030/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2030/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2030",
"html_url": "https://github.com/huggingface/datasets/pull/2030",
"diff_url": "https://github.com/huggingface/datasets/pull/2030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2030.patch",
"merged_at": "2021-03-18T13:29:29"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2029
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2029/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2029/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2029/events
|
https://github.com/huggingface/datasets/issues/2029
| 829,097,290
|
MDU6SXNzdWU4MjkwOTcyOTA=
| 2,029
|
Loading a faiss index KeyError
|
{
"login": "nbroad1881",
"id": 24982805,
"node_id": "MDQ6VXNlcjI0OTgyODA1",
"avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nbroad1881",
"html_url": "https://github.com/nbroad1881",
"followers_url": "https://api.github.com/users/nbroad1881/followers",
"following_url": "https://api.github.com/users/nbroad1881/following{/other_user}",
"gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}",
"starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions",
"organizations_url": "https://api.github.com/users/nbroad1881/orgs",
"repos_url": "https://api.github.com/users/nbroad1881/repos",
"events_url": "https://api.github.com/users/nbroad1881/events{/privacy}",
"received_events_url": "https://api.github.com/users/nbroad1881/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
| null |
[] | null |
[
"In your code `dataset2` doesn't contain the \"embeddings\" column, since it is created from the pandas DataFrame with columns \"text\" and \"label\".\r\n\r\nTherefore when you call `dataset2[embeddings_name]`, you get a `KeyError`.\r\n\r\nIf you want the \"embeddings\" column back, you can create `dataset2` with\r\n```python\r\ndataset2 = load_from_disk(dataset_filename)\r\n```\r\nwhere `dataset_filename` is the place where you saved you dataset with the embeddings in the first place.",
"Ok in that case HF should fix their misleading example at https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index \r\n\r\nI copy-pasted it here.\r\n\r\n> When you are done with your queries you can save your index on disk:\r\n> \r\n> ```python\r\n> ds_with_embeddings.save_faiss_index('embeddings', 'my_index.faiss')\r\n> ```\r\n> Then reload it later:\r\n> \r\n> ```python\r\n> ds = load_dataset('crime_and_punish', split='train[:100]')\r\n> ds.load_faiss_index('embeddings', 'my_index.faiss')\r\n> ```",
"Hi !\r\n\r\nThe code of the example is valid.\r\nAn index is a search engine, it's not considered a column of a dataset.\r\nWhen you do `ds.load_faiss_index(\"embeddings\", 'my_index.faiss')`, it attaches an index named \"embeddings\" to the dataset but it doesn't re-add the \"embeddings\" column. You can list the indexes of a dataset by using `ds.list_indexes()`.\r\n\r\nIf I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nThis looks misleading indeed, and we should add a note to make it more explicit that it doesn't store the column that was used to build the index.\r\n\r\nFeel free to open a PR to suggest an improvement on the documentation if you want to contribute :)",
"> If I understand correctly by reading this example you thought that it was re-adding the \"embeddings\" column.\r\nYes. I was trying to use the dataset in RAG and it complained that the dataset didn't have the right columns. No problems when loading the dataset with `load_from_disk` and then doing `load_faiss_index`\r\n\r\nWhat I learned was\r\n1. column and index are different\r\n2. loading the index does not create a column\r\n3. the column is not needed to be able to use the index\r\n4. RAG needs both the embeddings column and the index\r\n\r\nIf I can come up with a way to articulate this in the right spot in the docs, I'll open a PR"
] | 2021-03-11T12:16:13
| 2021-03-12T00:21:09
| 2021-03-12T00:21:09
|
NONE
| null |
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a file
5. Create a new dataset (dataset2) with the same text and label information as dataset1
6. Try to load the faiss index from file to dataset2
7. Get `KeyError: "Column embeddings not in the dataset"`
I've made a colab notebook that should show exactly what I did. Please switch to GPU runtime; I didn't check on CPU.
https://colab.research.google.com/drive/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing
Ubuntu Version
VERSION="18.04.5 LTS (Bionic Beaver)"
datasets==1.4.1
faiss==1.5.3
faiss-gpu==1.7.0
torch==1.8.0+cu101
transformers==4.3.3
NVIDIA-SMI 460.56
Driver Version: 460.32.03
CUDA Version: 11.2
Tesla K80
I was basically following the steps here: https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index
I included the exact code from the documentation at the end of the notebook to show that they don't work either.
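For reference, a minimal sketch of the flow that avoids the `KeyError`, assuming dataset1 (with its 'embeddings' column) was saved with `save_to_disk` beforehand; the paths are hypothetical:
```python
import numpy as np
from datasets import load_from_disk

# Reload the dataset that still contains the "embeddings" column...
ds = load_from_disk("my_dataset_with_embeddings")
# ...and re-attach the saved index. Loading an index does not recreate the
# column; it only registers a search index named "embeddings".
ds.load_faiss_index("embeddings", "my_index.faiss")

# Query with an embedding of the same dimensionality as the stored ones
query = np.random.rand(768).astype("float32")  # 768 is an assumption (DPR hidden size)
scores, retrieved = ds.get_nearest_examples("embeddings", query, k=5)
```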
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2029/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2029/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2028
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2028/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2028/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2028/events
|
https://github.com/huggingface/datasets/pull/2028
| 828,721,393
|
MDExOlB1bGxSZXF1ZXN0NTkwNDk1NzEx
| 2,028
|
Adding PersiNLU reading-comprehension
|
{
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danyaljj/followers",
"following_url": "https://api.github.com/users/danyaljj/following{/other_user}",
"gists_url": "https://api.github.com/users/danyaljj/gists{/gist_id}",
"starred_url": "https://api.github.com/users/danyaljj/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danyaljj/subscriptions",
"organizations_url": "https://api.github.com/users/danyaljj/orgs",
"repos_url": "https://api.github.com/users/danyaljj/repos",
"events_url": "https://api.github.com/users/danyaljj/events{/privacy}",
"received_events_url": "https://api.github.com/users/danyaljj/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-11T04:41:13
| 2021-03-15T09:39:57
| 2021-03-15T09:39:57
|
CONTRIBUTOR
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2028/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2028/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2028",
"html_url": "https://github.com/huggingface/datasets/pull/2028",
"diff_url": "https://github.com/huggingface/datasets/pull/2028.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2028.patch",
"merged_at": "2021-03-15T09:39:57"
}
| true
|
|
https://api.github.com/repos/huggingface/datasets/issues/2027
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2027/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2027/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2027/events
|
https://github.com/huggingface/datasets/pull/2027
| 828,490,444
|
MDExOlB1bGxSZXF1ZXN0NTkwMjkzNDA1
| 2,027
|
Update format columns in Dataset.rename_columns
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-10T23:50:59
| 2021-03-11T14:38:40
| 2021-03-11T14:38:40
|
CONTRIBUTOR
| null |
Fixes #2026
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2027/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2027/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2027",
"html_url": "https://github.com/huggingface/datasets/pull/2027",
"diff_url": "https://github.com/huggingface/datasets/pull/2027.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2027.patch",
"merged_at": "2021-03-11T14:38:40"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2026
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2026/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2026/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2026/events
|
https://github.com/huggingface/datasets/issues/2026
| 828,194,467
|
MDU6SXNzdWU4MjgxOTQ0Njc=
| 2,026
|
KeyError on using map after renaming a column
|
{
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi,\r\n\r\nActually, the error occurs due to these two lines:\r\n```python\r\nraw_dataset.set_format('torch',columns=['img','label'])\r\nraw_dataset = raw_dataset.rename_column('img','image')\r\n```\r\n`Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format`, with a new column name which is why this new column is missing in the output.",
"Hi @mariosasko,\n\nThanks for opening a PR on this :)\nWhy does the old name also disappear?",
"I just merged a @mariosasko 's PR that fixes this issue.\r\nIf it happens again, feel free to re-open :)"
] | 2021-03-10T18:54:17
| 2021-03-11T14:39:34
| 2021-03-11T14:38:40
|
CONTRIBUTOR
| null |
Hi,
I'm trying to use the `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying the `prepare_train_features` function.
Here is what I try:
```python
transform = Compose([ToPILImage(),ToTensor(),Normalize([0.0,0.0,0.0],[1.0,1.0,1.0])])
def prepare_features(examples):
images = []
labels = []
print(examples)
for example_idx, example in enumerate(examples["image"]):
if transform is not None:
images.append(transform(examples["image"][example_idx].permute(2,0,1)))
else:
images.append(examples["image"][example_idx].permute(2,0,1))
labels.append(examples["label"][example_idx])
output = {"label":labels, "image":images}
return output
raw_dataset = load_dataset('cifar10')
raw_dataset.set_format('torch',columns=['img','label'])
raw_dataset = raw_dataset.rename_column('img','image')
features = datasets.Features({
"image": datasets.Array3D(shape=(3,32,32),dtype="float32"),
"label": datasets.features.ClassLabel(names=[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
]),
})
train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)
```
The error:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-54-bf29672c53ee> in <module>()
14 ]),
15 })
---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)
2 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1287 test_inputs = self[:2] if batched else self[0]
1288 test_indices = [0, 1] if batched else 0
-> 1289 update_data = does_function_return_dict(test_inputs, test_indices)
1290 logger.info("Testing finished, running the mapping function on the dataset")
1291
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices)
1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
1259 processed_inputs = (
-> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1261 )
1262 does_return_dict = isinstance(processed_inputs, Mapping)
<ipython-input-52-b4dccbafb70d> in prepare_features(examples)
3 labels = []
4 print(examples)
----> 5 for example_idx, example in enumerate(examples["image"]):
6 if transform is not None:
7 images.append(transform(examples["image"][example_idx].permute(2,0,1)))
KeyError: 'image'
```
The print statement inside returns this:
```python
{'label': tensor([6, 9])}
```
Apparently, both `img` and `image` do not exist after renaming.
Note that this code works fine with `img` everywhere.
Notebook: https://colab.research.google.com/drive/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing
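Based on the explanation in the comments above, a minimal workaround sketch is to re-declare the format columns after renaming, since `rename_column` does not update the columns registered by `set_format`:
```python
from datasets import load_dataset

raw_dataset = load_dataset('cifar10')
raw_dataset = raw_dataset.rename_column('img', 'image')
# Re-register the format columns under the new name
raw_dataset.set_format('torch', columns=['image', 'label'])
```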
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2026/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2026/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2025
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2025/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2025/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2025/events
|
https://github.com/huggingface/datasets/pull/2025
| 828,047,476
|
MDExOlB1bGxSZXF1ZXN0NTg5ODk2NjMz
| 2,025
|
[Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-10T17:00:47
| 2021-03-30T14:46:53
| 2021-03-26T16:51:59
|
MEMBER
| null |
## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pickled/unpickled in-memory
- on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling
## Issues
Because of this assumption, we can't easily implement methods like `Dataset.add_item` to append more rows to a dataset, or `dataset.add_column` to add a column, since we can't mix data from memory and data from the disk.
Moreover, `concatenate_datasets` doesn't work if the datasets to concatenate are not all from memory, or all from the disk.
## Solution provided in this PR
I changed this by allowing several types of Table to be used in the Dataset object.
More specifically I added three pyarrow Table wrappers: InMemoryTable, MemoryMappedTable and ConcatenationTable.
The in-memory and memory-mapped tables implement the pickling behavior described above.
The ConcatenationTable can be made from several tables (either in-memory or memory mapped) called "blocks". Pickling a ConcatenationTable simply pickles the underlying blocks.
## Implementation details
The three tables classes mentioned above all inherit from a `Table` class defined in `table.py`, which is a wrapper of a pyarrow table. The `Table` wrapper implements all the attributes and methods of the underlying pyarrow table.
Regarding the MemoryMappedTable:
Reloading a pyarrow table from the disk makes you lose all the changes you may have applied (slice, rename_columns, drop, cast etc.). Therefore the MemoryMappedTable implements a "replay" mechanism to re-apply the changes when reloading the pyarrow table from the disk.
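Purely as an illustration of the replay idea (hypothetical class and method names, not the code in `table.py`): operations applied to the table are recorded and re-applied after the underlying pyarrow table is reloaded from disk.
```python
import pyarrow as pa


class ReplayedTableSketch:
    """Hypothetical sketch: record table operations so they can be re-applied
    after the underlying pyarrow table is reloaded (e.g. when unpickling)."""

    def __init__(self, table: pa.Table):
        self.table = table
        self._replays = []  # list of (method_name, args) tuples

    def _apply(self, method_name, *args):
        self._replays.append((method_name, args))
        self.table = getattr(self.table, method_name)(*args)

    def rename_columns(self, names):
        self._apply("rename_columns", names)

    def slice(self, offset, length):
        self._apply("slice", offset, length)

    def reload(self, fresh_table: pa.Table):
        # Re-apply the recorded operations on the freshly reloaded table
        for method_name, args in self._replays:
            fresh_table = getattr(fresh_table, method_name)(*args)
        self.table = fresh_table
```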
## Checklist
- [x] add InMemoryTable
- [x] add MemoryMappedTable
- [x] add ConcatenationTable
- [x] Update the ArrowReader to use these new tables depending on the `in_memory` parameter
- [x] Update Dataset.from_xxx methods
- [x] Update load_from_disk and save_to_disk
- [x] Backward compatibility of load_from_disk
- [x] Add tests for the new tables
- [x] Update current tests
- [ ] Documentation
----------
I would be happy to discuss the design of this PR :)
Close #1877
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2025/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2025/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2025",
"html_url": "https://github.com/huggingface/datasets/pull/2025",
"diff_url": "https://github.com/huggingface/datasets/pull/2025.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2025.patch",
"merged_at": "2021-03-26T16:51:58"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2024
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2024/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2024/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2024/events
|
https://github.com/huggingface/datasets/pull/2024
| 827,842,962
|
MDExOlB1bGxSZXF1ZXN0NTg5NzEzNDAy
| 2,024
|
Remove print statement from mnist.py
|
{
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-10T14:39:58
| 2021-03-11T18:03:52
| 2021-03-11T18:03:51
|
CONTRIBUTOR
| null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2024/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2024/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2024",
"html_url": "https://github.com/huggingface/datasets/pull/2024",
"diff_url": "https://github.com/huggingface/datasets/pull/2024.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2024.patch",
"merged_at": null
}
| true
|
|
https://api.github.com/repos/huggingface/datasets/issues/2023
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2023/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2023/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2023/events
|
https://github.com/huggingface/datasets/pull/2023
| 827,819,608
|
MDExOlB1bGxSZXF1ZXN0NTg5NjkyNDU2
| 2,023
|
Add Romanian to XQuAD
|
{
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/followers",
"following_url": "https://api.github.com/users/M-Salti/following{/other_user}",
"gists_url": "https://api.github.com/users/M-Salti/gists{/gist_id}",
"starred_url": "https://api.github.com/users/M-Salti/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/M-Salti/subscriptions",
"organizations_url": "https://api.github.com/users/M-Salti/orgs",
"repos_url": "https://api.github.com/users/M-Salti/repos",
"events_url": "https://api.github.com/users/M-Salti/events{/privacy}",
"received_events_url": "https://api.github.com/users/M-Salti/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-10T14:24:32
| 2021-03-15T10:08:17
| 2021-03-15T10:08:17
|
CONTRIBUTOR
| null |
On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https://github.com/deepmind/xquad/commit/60cac411649156efb6aab9dd4c9cde787a2c0345))
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2023/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2023/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2023",
"html_url": "https://github.com/huggingface/datasets/pull/2023",
"diff_url": "https://github.com/huggingface/datasets/pull/2023.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2023.patch",
"merged_at": "2021-03-15T10:08:17"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2022
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2022/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2022/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2022/events
|
https://github.com/huggingface/datasets/issues/2022
| 827,435,033
|
MDU6SXNzdWU4Mjc0MzUwMzM=
| 2,022
|
ValueError when rename_column on splitted dataset
|
{
"login": "simonschoe",
"id": 53626067,
"node_id": "MDQ6VXNlcjUzNjI2MDY3",
"avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonschoe",
"html_url": "https://github.com/simonschoe",
"followers_url": "https://api.github.com/users/simonschoe/followers",
"following_url": "https://api.github.com/users/simonschoe/following{/other_user}",
"gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions",
"organizations_url": "https://api.github.com/users/simonschoe/orgs",
"repos_url": "https://api.github.com/users/simonschoe/repos",
"events_url": "https://api.github.com/users/simonschoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonschoe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi,\r\n\r\nThis is a bug so thanks for reporting it. `Dataset.__setstate__` is the problem, which is called when `Dataset.rename_column` tries to copy the dataset with `copy.deepcopy(self)`. This only happens if the `split` arg in `load_dataset` was defined as `ReadInstruction`.\r\n\r\nTo overcome this issue, use the named splits API (for now):\r\n```python\r\ntrain_ds, test_ds = load_dataset(\r\n path='csv', \r\n delimiter='\\t', \r\n data_files=text_files, \r\n split=['train[:90%]', 'train[-10%:]'],\r\n)\r\n\r\ntrain_ds = train_ds.rename_column('sentence', 'text')\r\n```",
"This has been fixed in #2043 , thanks @mariosasko \r\nThe fix is available on master and we'll do a new release soon :)\r\n\r\nfeel free to re-open if you still have issues"
] | 2021-03-10T09:40:38
| 2021-03-16T14:06:08
| 2021-03-16T14:05:05
|
NONE
| null |
Hi there,
I am loading a `.tsv` file via `load_dataset` and subsequently splitting the rows into a training and a test set via the `ReadInstruction` API like so:
```python
split = {
'train': ReadInstruction('train', to=90, unit='%'),
'test': ReadInstruction('train', from_=-10, unit='%')
}
dataset = load_dataset(
path='csv', # use 'text' loading script to load from local txt-files
delimiter='\t', # xxx
data_files=text_files, # list of paths to local text files
split=split, # xxx
)
dataset
```
Part of output:
```python
DatasetDict({
train: Dataset({
features: ['sentence', 'sentiment'],
num_rows: 900
})
test: Dataset({
features: ['sentence', 'sentiment'],
num_rows: 100
})
})
```
Afterwards I'd like to rename the 'sentence' column to 'text' in order to be compatible with my modeling pipeline. If I run the following code, I experience a `ValueError` however:
```python
dataset['train'].rename_column('sentence', 'text')
```
```python
/usr/local/lib/python3.7/dist-packages/datasets/splits.py in __init__(self, name)
353 for split_name in split_names_from_instruction:
354 if not re.match(_split_re, split_name):
--> 355 raise ValueError(f"Split name should match '{_split_re}'' but got '{split_name}'.")
356
357 def __str__(self):
ValueError: Split name should match '^\w+(\.\w+)*$'' but got 'ReadInstruction('.
```
In particular, this behavior does not arise if I use the deprecated `rename_column_` method. Any idea what causes the error? I assume it has something to do with the way I defined the split.
Thanks in advance! :)
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2022/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2022/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2021
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2021/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2021/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2021/events
|
https://github.com/huggingface/datasets/issues/2021
| 826,988,016
|
MDU6SXNzdWU4MjY5ODgwMTY=
| 2,021
|
Interactively doing save_to_disk and load_from_disk corrupts the datasets object?
|
{
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi,\r\n\r\nCan you give us a minimal reproducible example? This [part](https://huggingface.co/docs/datasets/master/processing.html#controling-the-cache-behavior) of the docs explains how to control caching."
] | 2021-03-10T02:48:34
| 2021-03-13T10:07:41
| 2021-03-13T10:07:41
|
NONE
| null |
The dataset_info.json file saved after using save_to_disk gets corrupted as follows.

Is there a way to disable the cache that saves to /tmp/huggingface/datasets?
I have a feeling there is a serious issue with caching.
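For reference, a minimal sketch of one way to keep the data out of the default cache location entirely (the `cache_dir` argument and the `save_to_disk`/`load_from_disk` APIs are standard in `datasets`; the file names and paths below are placeholders):
```python
from datasets import load_dataset, load_from_disk

# Build the dataset with its cache redirected to a directory of our choosing
# instead of the default cache location.
dataset = load_dataset("csv", data_files="data.csv", split="train", cache_dir="/my/cache/dir")

# Persist the processed dataset to an explicit directory ...
dataset.save_to_disk("/my/persistent/dataset")

# ... and reload it later straight from that directory.
reloaded = load_from_disk("/my/persistent/dataset")
```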
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2021/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2021/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2020
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2020/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2020/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2020/events
|
https://github.com/huggingface/datasets/pull/2020
| 826,961,126
|
MDExOlB1bGxSZXF1ZXN0NTg4OTE3MjYx
| 2,020
|
Remove unnecessary docstart check in conll-like datasets
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-10T02:20:16
| 2021-03-11T13:33:37
| 2021-03-11T13:33:37
|
CONTRIBUTOR
| null |
Related to this PR: #1998
Additionally, this PR adds the docstart note to the conll2002 dataset card ([link](https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/ned.train) to the raw data with `DOCSTART` lines).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2020/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2020/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2020",
"html_url": "https://github.com/huggingface/datasets/pull/2020",
"diff_url": "https://github.com/huggingface/datasets/pull/2020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2020.patch",
"merged_at": "2021-03-11T13:33:37"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2019
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2019/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2019/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2019/events
|
https://github.com/huggingface/datasets/pull/2019
| 826,625,706
|
MDExOlB1bGxSZXF1ZXN0NTg4NjEyODgy
| 2,019
|
Replace print with logging in dataset scripts
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-09T20:59:34
| 2021-03-12T10:09:01
| 2021-03-11T16:14:19
|
CONTRIBUTOR
| null |
Replaces `print(...)` in the dataset scripts with the library logger.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2019/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2019/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2019",
"html_url": "https://github.com/huggingface/datasets/pull/2019",
"diff_url": "https://github.com/huggingface/datasets/pull/2019.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2019.patch",
"merged_at": "2021-03-11T16:14:18"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2018
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2018/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2018/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2018/events
|
https://github.com/huggingface/datasets/pull/2018
| 826,473,764
|
MDExOlB1bGxSZXF1ZXN0NTg4NDc0NTQz
| 2,018
|
Md gender card update
|
{
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-09T18:57:20
| 2021-03-12T17:31:00
| 2021-03-12T17:31:00
|
CONTRIBUTOR
| null |
I updated the descriptions of the datasets as they appear in the HF repo and the descriptions of the source datasets according to what I could find from the paper and the references. I'm still a little unclear about some of the fields of the different configs, and there was little info on the word list and name list. I'll contact the authors to see if they have any additional information or suggested changes.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2018/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2018/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2018",
"html_url": "https://github.com/huggingface/datasets/pull/2018",
"diff_url": "https://github.com/huggingface/datasets/pull/2018.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2018.patch",
"merged_at": "2021-03-12T17:31:00"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2017
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2017/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2017/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2017/events
|
https://github.com/huggingface/datasets/pull/2017
| 826,428,578
|
MDExOlB1bGxSZXF1ZXN0NTg4NDMyNDc2
| 2,017
|
Add TF-based Features to handle different modes of data
|
{
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-09T18:29:52
| 2021-03-17T12:32:08
| 2021-03-17T12:32:07
|
CONTRIBUTOR
| null |
Hi,
I am creating this draft PR to work on adding features similar to [TF datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/core/features). I'll be starting with the `Tensor` and `FeatureConnector` classes, and will build upon them to add other features as well. This is a work in progress.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2017/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2017/timeline
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2017",
"html_url": "https://github.com/huggingface/datasets/pull/2017",
"diff_url": "https://github.com/huggingface/datasets/pull/2017.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2017.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2016
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2016/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2016/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2016/events
|
https://github.com/huggingface/datasets/pull/2016
| 825,965,493
|
MDExOlB1bGxSZXF1ZXN0NTg4MDA5NjEz
| 2,016
|
Not all languages have 2 digit codes.
|
{
"login": "asiddhant",
"id": 13891775,
"node_id": "MDQ6VXNlcjEzODkxNzc1",
"avatar_url": "https://avatars.githubusercontent.com/u/13891775?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/asiddhant",
"html_url": "https://github.com/asiddhant",
"followers_url": "https://api.github.com/users/asiddhant/followers",
"following_url": "https://api.github.com/users/asiddhant/following{/other_user}",
"gists_url": "https://api.github.com/users/asiddhant/gists{/gist_id}",
"starred_url": "https://api.github.com/users/asiddhant/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/asiddhant/subscriptions",
"organizations_url": "https://api.github.com/users/asiddhant/orgs",
"repos_url": "https://api.github.com/users/asiddhant/repos",
"events_url": "https://api.github.com/users/asiddhant/events{/privacy}",
"received_events_url": "https://api.github.com/users/asiddhant/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-09T13:53:39
| 2021-03-11T18:01:03
| 2021-03-11T18:01:03
|
CONTRIBUTOR
| null |
.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2016/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2016/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2016",
"html_url": "https://github.com/huggingface/datasets/pull/2016",
"diff_url": "https://github.com/huggingface/datasets/pull/2016.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2016.patch",
"merged_at": "2021-03-11T18:01:03"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2015
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2015/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2015/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2015/events
|
https://github.com/huggingface/datasets/pull/2015
| 825,942,108
|
MDExOlB1bGxSZXF1ZXN0NTg3OTg4NTQ0
| 2,015
|
Fix ipython function creation in tests
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-09T13:36:59
| 2021-03-09T14:06:04
| 2021-03-09T14:06:03
|
MEMBER
| null |
The test at `tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function` was failing in python 3.8 because the ipython function was not properly created.
Fix #2010
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2015/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2015/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2015",
"html_url": "https://github.com/huggingface/datasets/pull/2015",
"diff_url": "https://github.com/huggingface/datasets/pull/2015.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2015.patch",
"merged_at": "2021-03-09T14:06:03"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2014
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2014/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2014/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2014/events
|
https://github.com/huggingface/datasets/pull/2014
| 825,916,531
|
MDExOlB1bGxSZXF1ZXN0NTg3OTY1NDg3
| 2,014
|
more explicit method parameters
|
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-09T13:18:29
| 2021-03-10T10:08:37
| 2021-03-10T10:08:36
|
CONTRIBUTOR
| null |
re: #2009
Not super convinced this is better, and while I usually fight against kwargs, here it seems to me that it better conveys the relationship to the `_split_generator` method.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2014/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2014/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2014",
"html_url": "https://github.com/huggingface/datasets/pull/2014",
"diff_url": "https://github.com/huggingface/datasets/pull/2014.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2014.patch",
"merged_at": "2021-03-10T10:08:36"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2013
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2013/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2013/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2013/events
|
https://github.com/huggingface/datasets/pull/2013
| 825,694,305
|
MDExOlB1bGxSZXF1ZXN0NTg3NzYzMTgx
| 2,013
|
Add Cryptonite dataset
|
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-09T10:32:11
| 2021-03-09T19:27:07
| 2021-03-09T19:27:06
|
CONTRIBUTOR
| null |
cc @aviaefrat who's the original author of the dataset & paper, see https://github.com/aviaefrat/cryptonite
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2013/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2013/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2013",
"html_url": "https://github.com/huggingface/datasets/pull/2013",
"diff_url": "https://github.com/huggingface/datasets/pull/2013.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2013.patch",
"merged_at": "2021-03-09T19:27:06"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2012
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2012/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2012/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2012/events
|
https://github.com/huggingface/datasets/issues/2012
| 825,634,064
|
MDU6SXNzdWU4MjU2MzQwNjQ=
| 2,012
|
No upstream branch
|
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"What's the issue exactly ?\r\n\r\nGiven an `upstream` remote repository with url `https://github.com/huggingface/datasets.git`, you can totally rebase from `upstream/master`.\r\n\r\nIt's mentioned at the beginning how to add the `upstream` remote repository\r\n\r\nhttps://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L10-L14",
"~~What difference is there with the default `origin` remote that is set when you clone the repo?~~ I've just understood that this applies to **forks** of the repo 🤡 "
] | 2021-03-09T09:48:55
| 2021-03-09T11:33:31
| 2021-03-09T11:33:31
|
CONTRIBUTOR
| null |
Feels like the documentation on adding a new dataset is outdated?
https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54
There is no `upstream` branch on the remote.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2012/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2012/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2011
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2011/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2011/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2011/events
|
https://github.com/huggingface/datasets/pull/2011
| 825,621,952
|
MDExOlB1bGxSZXF1ZXN0NTg3Njk4MTAx
| 2,011
|
Add RoSent Dataset
|
{
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-09T09:40:08
| 2021-03-11T18:00:52
| 2021-03-11T18:00:52
|
CONTRIBUTOR
| null |
This PR adds a Romanian sentiment analysis dataset. This PR also closes pending PR #1529.
I had to add an `original_id` feature because the dataset files have repeated IDs. I can remove it if needed. I have also added an `id` feature, which is unique.
Let me know in case of any issues.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2011/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2011/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2011",
"html_url": "https://github.com/huggingface/datasets/pull/2011",
"diff_url": "https://github.com/huggingface/datasets/pull/2011.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2011.patch",
"merged_at": "2021-03-11T18:00:52"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2010
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2010/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2010/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2010/events
|
https://github.com/huggingface/datasets/issues/2010
| 825,567,635
|
MDU6SXNzdWU4MjU1Njc2MzU=
| 2,010
|
Local testing fails
|
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"I'm not able to reproduce on my side.\r\nCan you provide the full stacktrace please ?\r\nWhat version of `python` and `dill` do you have ? Which OS are you using ?",
"```\r\nco_filename = '<ipython-input-2-e0383a102aae>', returned_obj = [0]\r\n \r\n def create_ipython_func(co_filename, returned_obj):\r\n def func():\r\n return returned_obj\r\n \r\n code = func.__code__\r\n> code = CodeType(*[getattr(code, k) if k != \"co_filename\" else co_filename for k in code_args])\r\nE TypeError: an integer is required (got type bytes)\r\n\r\ntests/test_caching.py:152: TypeError\r\n```\r\n\r\nPython 3.8.8 \r\ndill==0.3.1.1\r\n",
"I managed to reproduce. This comes from the CodeType init signature that is different in python 3.8.8\r\nI opened a PR to fix this test\r\nThanks !"
] | 2021-03-09T09:01:38
| 2021-03-09T14:06:03
| 2021-03-09T14:06:03
|
CONTRIBUTOR
| null |
I'm following the CI setup as described in
https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19
in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4
and getting
```
FAILED tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function - TypeError: an integer is required (got type bytes)
1 failed, 2321 passed, 5109 skipped, 10 warnings in 124.32s (0:02:04)
```
Seems like a discrepancy with CI, perhaps a lib version that's not controlled?
Tried with `pyarrow=={1.0.0,0.17.1,2.0.0}`
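Not the actual patch from the linked fix, but a minimal sketch of why this test is Python-version sensitive: since Python 3.8, `CodeType.replace` lets you rebuild a code object with a new `co_filename` without relying on the positional `CodeType(...)` constructor, whose signature changed between versions.
```python
def with_filename(func, co_filename):
    # CodeType.replace (Python 3.8+) avoids enumerating the positional
    # constructor arguments, whose order and number differ across versions.
    func.__code__ = func.__code__.replace(co_filename=co_filename)
    return func


def fake_ipython_func():
    return [0]


fake_ipython_func = with_filename(fake_ipython_func, "<ipython-input-2-e0383a102aae>")
print(fake_ipython_func.__code__.co_filename)  # "<ipython-input-2-e0383a102aae>"
```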
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2010/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2010/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2009
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2009/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2009/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2009/events
|
https://github.com/huggingface/datasets/issues/2009
| 825,541,366
|
MDU6SXNzdWU4MjU1NDEzNjY=
| 2,009
|
Ambiguous documentation
|
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
|
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "theo-m",
"id": 17948980,
"node_id": "MDQ6VXNlcjE3OTQ4OTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/theo-m",
"html_url": "https://github.com/theo-m",
"followers_url": "https://api.github.com/users/theo-m/followers",
"following_url": "https://api.github.com/users/theo-m/following{/other_user}",
"gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}",
"starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/theo-m/subscriptions",
"organizations_url": "https://api.github.com/users/theo-m/orgs",
"repos_url": "https://api.github.com/users/theo-m/repos",
"events_url": "https://api.github.com/users/theo-m/events{/privacy}",
"received_events_url": "https://api.github.com/users/theo-m/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi @theo-m !\r\n\r\nA few lines above this line, you'll find that the `_split_generators` method returns a list of `SplitGenerator`s objects:\r\n\r\n```python\r\ndatasets.SplitGenerator(\r\n name=datasets.Split.VALIDATION,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n \"filepath\": os.path.join(data_dir, \"dev.jsonl\"),\r\n \"split\": \"dev\",\r\n },\r\n),\r\n```\r\n\r\nNotice the `gen_kwargs` argument passed to the constructor of `SplitGenerator`: this dict will be unpacked as keyword arguments to pass to the `_generat_examples` method (in this case the `filepath` and `split` arguments).\r\n\r\nLet me know if that helps!",
"Oh ok I hadn't made the connection between those two, will offer a tweak to the comment and the template then - thanks!"
] | 2021-03-09T08:42:11
| 2021-03-12T15:01:34
| 2021-03-12T15:01:34
|
CONTRIBUTOR
| null |
https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158
Looking at the template, I find this documentation line confusing: the method parameters don't include `gen_kwargs`, so I'm unclear where they're coming from.
Happy to push a PR with a clearer statement when I understand the meaning.
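To make the connection concrete, here is a minimal, abridged sketch (hypothetical file names, following the template's structure) of how the `gen_kwargs` dict defined in `_split_generators` becomes the keyword arguments of `_generate_examples`:
```python
import datasets


class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # This dict is unpacked as keyword arguments of _generate_examples
                gen_kwargs={"filepath": "train.jsonl", "split": "train"},
            )
        ]

    def _generate_examples(self, filepath, split):
        # `filepath` and `split` arrive here via the gen_kwargs defined above
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```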
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2009/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2009/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2008
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2008/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2008/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2008/events
|
https://github.com/huggingface/datasets/pull/2008
| 825,153,804
|
MDExOlB1bGxSZXF1ZXN0NTg3Mjc1Njk4
| 2,008
|
Fix various typos/grammer in the docs
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-09T01:39:28
| 2021-03-15T18:42:49
| 2021-03-09T10:21:32
|
CONTRIBUTOR
| null |
This PR:
* fixes various typos/grammar issues I came across while reading the docs
* adds the "Install with conda" installation instructions
Closes #1959
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2008/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2008/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2008",
"html_url": "https://github.com/huggingface/datasets/pull/2008",
"diff_url": "https://github.com/huggingface/datasets/pull/2008.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2008.patch",
"merged_at": "2021-03-09T10:21:32"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2007
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2007/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2007/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2007/events
|
https://github.com/huggingface/datasets/issues/2007
| 824,518,158
|
MDU6SXNzdWU4MjQ1MTgxNTg=
| 2,007
|
How to not load huggingface datasets into memory
|
{
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"So maybe a summary here: \r\nIf I could fit a large model with batch_size = X into memory, is there a way I could train this model for huge datasets with keeping setting the same? thanks ",
"The `datastets` library doesn't load datasets into memory. Therefore you can load a dataset that is terabytes big without filling up your RAM.\r\n\r\nThe only thing that's loaded into memory during training is the batch used in the training step.\r\nSo as long as your model works with batch_size = X, then you can load an even bigger dataset and it will work as well with the same batch_size.\r\n\r\nNote that you still have to take into account that some batches take more memory than others, depending on the texts lengths. If it works for a batch with batch_size = X and with texts of maximum length, then it will work for all batches.\r\n\r\nIn your case I guess that there are a few long sentences in the dataset. For those long sentences you get a memory error on your GPU because they're too long. By passing `max_train_samples` you may have taken a subset of the dataset that only contain short sentences. That's probably why in your case it worked only when you set `max_train_samples`.\r\nI'd suggest you to reduce the batch size so that the batches with long sentences can be loaded on the GPU.\r\n\r\nLet me know if that helps or if you have other questions"
] | 2021-03-08T12:35:26
| 2021-08-04T18:02:25
| 2021-08-04T18:02:25
|
NONE
| null |
Hi
I am running this example from transformers library version 4.3.3:
(Here is the full documentation https://github.com/huggingface/transformers/issues/8771 but the running command should work out of the box)
USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --max_source_length 128 --max_target_length 128 --sortish_sampler --per_device_train_batch_size 8 --val_max_target_length 128 --deepspeed ds_config.json --num_train_epochs 1 --eval_steps 25000 --warmup_steps 500 --overwrite_output_dir
(Here please find the script: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py)
If I do not pass max_train_samples in the above command (i.e. load the full dataset), I get a memory issue on a GPU with 24 gigabytes of memory.
I need to train a large-scale mt5 model on large-scale datasets such as Wikipedia (several of them concatenated, or other datasets in multiple languages like OPUS). Could you help me figure out how to avoid loading the full data into memory, so that the scripts do not depend on the dataset size?
In the above example, I was hoping the script could work without relying on the dataset size, so I can still train the model without subsampling the training set.
thank you so much @lhoestq for your great help in advance
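For what it's worth, a minimal sketch of the memory-mapping point made in the reply above (the dataset name matches the command above; the batch size is illustrative): the Arrow-backed dataset lives on disk, and only the slice you index is materialized in RAM.
```python
from datasets import load_dataset

# The dataset is memory-mapped from Arrow files on disk, so this does not
# load the whole corpus into RAM (the download itself is still large).
dataset = load_dataset("wmt16", "ro-en", split="train")

# Only the rows of the current slice are brought into memory at a time.
for start in range(0, len(dataset), 8):
    batch = dataset[start : start + 8]  # dict of columns -> lists of 8 rows
    # ... tokenize and feed `batch` to the model here ...
```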
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2007/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2007/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2006
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2006/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2006/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2006/events
|
https://github.com/huggingface/datasets/pull/2006
| 824,457,794
|
MDExOlB1bGxSZXF1ZXN0NTg2Njg5Nzk2
| 2,006
|
Don't gitignore dvc.lock
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-08T11:13:08
| 2021-03-08T11:28:35
| 2021-03-08T11:28:34
|
MEMBER
| null |
The benchmark runs are [failing](https://github.com/huggingface/datasets/runs/2055534629?check_suite_focus=true) because of
```
ERROR: 'dvc.lock' is git-ignored.
```
I removed the dvc.lock file from the gitignore to fix that.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2006/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2006/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2006",
"html_url": "https://github.com/huggingface/datasets/pull/2006",
"diff_url": "https://github.com/huggingface/datasets/pull/2006.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2006.patch",
"merged_at": "2021-03-08T11:28:34"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2005
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2005/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2005/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2005/events
|
https://github.com/huggingface/datasets/issues/2005
| 824,275,035
|
MDU6SXNzdWU4MjQyNzUwMzU=
| 2,005
|
Setting to torch format not working with torchvision and MNIST
|
{
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Adding to the previous information, I think `torch.utils.data.DataLoader` is doing some conversion. \r\nWhat I tried:\r\n```python\r\ntrain_dataset = load_dataset('mnist')\r\n```\r\nI don't use any `map` or `set_format` or any `transform`. I use this directly, and try to load batches using the `DataLoader` with batch size 2, I get an output like this for the `image`:\r\n\r\n```\r\n[[tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor([0, 0]), tensor...\r\n```\r\nFor `label`, it works fine:\r\n```\r\ntensor([7, 6])\r\n```\r\nNote that I didn't specify conversion to torch tensors anywhere.\r\n\r\nBasically, there are two problems here:\r\n1. `dataset.map` doesn't return tensor type objects, even though it uses the transforms, the grayscale conversion in transform was done, but the output was lists only.\r\n2. The `DataLoader` performs its own conversion, which may be not desired.\r\n\r\nI understand that we can't change `DataLoader` because it is a torch functionality, however, is there a way we can handle image data to allow using it with torch `DataLoader` and `torchvision` properly?\r\n\r\nI think if the `image` was a torch tensor (N,H,W,C), or a list of torch tensors (H,W,C), before it is passed to `DataLoader`, then we might not face this issue. ",
"What's the feature types of your new dataset after `.map` ?\r\n\r\nCan you try with adding `features=` in the `.map` call in order to set the \"image\" feature type to `Array2D` ?\r\nThe default feature type is lists of lists, we've not implemented shape verification to use ArrayXD instead of nested lists yet",
"Hi @lhoestq\r\n\r\nRaw feature types are like this:\r\n```\r\nImage:\r\n<class 'list'> 60000 #(type, len)\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'int'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\nInside the `prepare_feature` method with batch size 100000 , after processing, they are like this:\r\n\r\nInside Prepare Train Features\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter map, the feature type are like this:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'float'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\n\r\nAfter dataloader with batch size 2, the batch features are like this:\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n<hr>\r\n\r\nWhen I was setting the format of `train_dataset` to 'torch' after mapping - \r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nCorresponding DataLoader batch:\r\n```\r\nFrom DataLoader batch features\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nI will check with features and get back.\r\n\r\n\r\n\r\n",
"Hi @lhoestq\r\n\r\n# Using Array3D\r\nI tried this:\r\n```python\r\nfeatures = datasets.Features({\r\n \"image\": datasets.Array3D(shape=(1,28,28),dtype=\"float32\"),\r\n \"label\": datasets.features.ClassLabel(names=[\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]),\r\n })\r\ntrain_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n```\r\nand it didn't fix the issue.\r\n\r\nDuring the `prepare_train_features:\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter the `map`:\r\n\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'float'>\r\nLabel:\r\n<class 'list'> 60000\r\n<class 'int'>\r\n```\r\nFrom the DataLoader batch:\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\nIt is the same as before.\r\n\r\n---\r\n\r\nUsing `datasets.Sequence(datasets.Array2D(shape=(28,28),dtype=\"float32\"))` gave an error during `map`:\r\n\r\n```python\r\nArrowNotImplementedError Traceback (most recent call last)\r\n<ipython-input-95-d28e69289084> in <module>()\r\n 3 \"label\": datasets.features.ClassLabel(names=[\"0\", \"1\", \"2\", \"3\", \"4\", \"5\", \"6\", \"7\", \"8\", \"9\"]),\r\n 4 })\r\n----> 5 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)\r\n\r\n15 frames\r\n/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc)\r\n 446 num_proc=num_proc,\r\n 447 )\r\n--> 448 for k, dataset in self.items()\r\n 449 }\r\n 450 )\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py in <dictcomp>(.0)\r\n 446 num_proc=num_proc,\r\n 447 )\r\n--> 448 for k, dataset in self.items()\r\n 449 }\r\n 450 )\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)\r\n 1307 fn_kwargs=fn_kwargs,\r\n 1308 new_fingerprint=new_fingerprint,\r\n-> 1309 update_data=update_data,\r\n 1310 )\r\n 1311 else:\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)\r\n 202 }\r\n 203 # apply actual function\r\n--> 204 out: Union[\"Dataset\", \"DatasetDict\"] = func(self, *args, **kwargs)\r\n 205 datasets: List[\"Dataset\"] = list(out.values()) if isinstance(out, dict) else [out]\r\n 206 # re-apply format to the output\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)\r\n 335 # Call actual function\r\n 336 \r\n--> 337 out = func(self, *args, **kwargs)\r\n 338 \r\n 339 # Update fingerprint of in-place transforms + update in-place history of transforms\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, 
keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)\r\n 1580 if update_data:\r\n 1581 batch = cast_to_python_objects(batch)\r\n-> 1582 writer.write_batch(batch)\r\n 1583 if update_data:\r\n 1584 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in write_batch(self, batch_examples, writer_batch_size)\r\n 274 typed_sequence = TypedSequence(batch_examples[col], type=col_type, try_type=col_try_type)\r\n 275 typed_sequence_examples[col] = typed_sequence\r\n--> 276 pa_table = pa.Table.from_pydict(typed_sequence_examples)\r\n 277 self.write_table(pa_table, writer_batch_size)\r\n 278 \r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/table.pxi in pyarrow.lib.Table.from_pydict()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.asarray()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()\r\n\r\n/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in __arrow_array__(self, type)\r\n 95 out = pa.ExtensionArray.from_storage(type, pa.array(self.data, type.storage_dtype))\r\n 96 else:\r\n---> 97 out = pa.array(self.data, type=type)\r\n 98 if trying_type and out[0].as_py() != self.data[0]:\r\n 99 raise TypeError(\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()\r\n\r\n/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()\r\n\r\nArrowNotImplementedError: extension\r\n```",
"# Convert raw tensors to torch format\r\nStrangely, converting to torch tensors works perfectly on `raw_dataset`:\r\n```python\r\nraw_dataset.set_format('torch',columns=['image','label'])\r\n```\r\nTypes:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nUsing this for transforms:\r\n```python\r\ndef prepare_features(examples):\r\n images = []\r\n labels = []\r\n for example_idx, example in enumerate(examples[\"image\"]):\r\n if transform is not None:\r\n images.append(transform(\r\n examples[\"image\"][example_idx].numpy()\r\n ))\r\n else:\r\n images.append(examples[\"image\"][example_idx].numpy())\r\n labels.append(examples[\"label\"][example_idx])\r\n output = {\"label\":labels, \"image\":images}\r\n return output\r\n```\r\n\r\nInside `prepare_train_features`:\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter `map`:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\nDataLoader batch:\r\n\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\n---\r\n\r\n## Using `torch` format:\r\n```\r\nImage:\r\n<class 'list'> 60000\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\nDataLoader batches:\r\n\r\n```\r\nImage:\r\n<class 'list'> 1\r\n<class 'list'> 28\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\n---\r\n## Using the features - `Array3D`:\r\n\r\n```\r\nImage:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'list'> 10000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter `map`:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 60000\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nAfter DataLoader `batch`:\r\n```\r\nImage:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'> 1\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'> 28\r\n<class 'torch.Tensor'>\r\nLabel:\r\n<class 'torch.Tensor'> 2\r\n<class 'torch.Tensor'>\r\n```\r\n\r\nThe last one works perfectly.\r\n\r\n\r\n\r\nI wonder why this worked, and others didn't.\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"Concluding, the way it works right now is:\r\n\r\n1. Converting raw dataset to `torch` format.\r\n2. Use the transform and apply using `map`, ensure the returned values are tensors. \r\n3. When mapping, use `features` with `image` being `Array3D` type.",
"What the dataset returns depends on the feature type.\r\nFor a feature type that is Sequence(Sequence(Sequence(Value(\"uint8\")))), a dataset formatted as \"torch\" return lists of lists of tensors. This is because the lists lengths may vary.\r\nFor a feature type that is Array3D on the other hand it returns one tensor. This is because the size of the tensor is fixed and defined bu the Array3D type.",
"Okay, that makes sense.\r\nRaw images are list of Array2D, hence we get a single tensor when `set_format` is used. But, why should I need to convert the raw images to `torch` format when `map` does this internally?\r\n\r\nUsing `Array3D` did not work with `map` when raw images weren't `set_format`ted to torch type.",
"I understand that `map` needs to know what kind of output tensors are expected, and thus converting the raw dataset to `torch` format is necessary. Closing the issue since it is resolved."
] | 2021-03-08T07:38:11
| 2021-03-09T17:58:13
| 2021-03-09T17:58:13
|
CONTRIBUTOR
| null |
Hi
I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains the `torchvision.transforms` object.
A snippet of what I am trying to do:
```python
import numpy as np
import torch
from datasets import load_dataset

def prepare_features(examples):
images = []
labels = []
for example_idx, example in enumerate(examples["image"]):
if transform is not None:
images.append(transform(
np.array(examples["image"][example_idx], dtype=np.uint8)
))
else:
images.append(torch.tensor(np.array(examples["image"][example_idx], dtype=np.uint8)))
labels.append(torch.tensor(examples["label"][example_idx]))
output = {"label":labels, "image":images}
return output
raw_dataset = load_dataset('mnist')
train_dataset = raw_dataset.map(prepare_features, batched=True, batch_size=10000)
train_dataset.set_format("torch",columns=["image","label"])
```
After this, I check the type of the following:
```python
print(type(train_dataset["train"]["label"]))
print(type(train_dataset["train"]["image"][0]))
```
This leads to the following output:
```python
<class 'torch.Tensor'>
<class 'list'>
```
I use `torch.utils.data.DataLoader` for batches; the type of `batch["train"]["image"]` is also `<class 'list'>`.
I don't understand why only the `label` is converted to a torch tensor. Why does the image not get converted? How can I fix this issue?
Thanks,
Gunjan
EDIT:
I just checked the shapes and the types: `batch["image"]` is actually a list of lists of tensors. The shape is (1,28,2,28), where `batch_size` is 2. I don't understand why this is happening. Ideally it should be a tensor of shape (2,1,28,28).
EDIT 2:
Inside `prepare_train_features`, the shape of `images[0]` is `torch.Size([1,28,28])`, so the conversion is working. However, the output of `map` is a list of lists of lists of lists.
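
For reference, here is a minimal sketch of the recipe that eventually resolved this thread (set the torch format on the raw dataset, return tensors with a channel dimension from the mapped function, and pass an `Array3D` feature type to `map`). The transform itself is omitted and the batch size is illustrative:
```python
import datasets

raw_dataset = datasets.load_dataset("mnist")
# 1. Put the raw dataset in torch format so images arrive as tensors inside the mapped function.
raw_dataset.set_format("torch", columns=["image", "label"])

features = datasets.Features(
    {
        "image": datasets.Array3D(shape=(1, 28, 28), dtype="float32"),
        "label": datasets.features.ClassLabel(names=[str(i) for i in range(10)]),
    }
)

def prepare_features(examples):
    # 2. Return tensors (optionally after a torchvision transform) with a channel dimension.
    images = [img.unsqueeze(0).float() for img in examples["image"]]
    return {"image": images, "label": list(examples["label"])}

# 3. Pass the Array3D features to map so the output keeps a fixed-shape tensor type.
train_dataset = raw_dataset.map(prepare_features, features=features, batched=True, batch_size=10000)
train_dataset.set_format("torch", columns=["image", "label"])
```
With this, each DataLoader batch yields an image tensor of shape (batch_size, 1, 28, 28), as described in the comments above.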
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2005/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2005/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2004
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2004/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2004/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2004/events
|
https://github.com/huggingface/datasets/pull/2004
| 824,080,760
|
MDExOlB1bGxSZXF1ZXN0NTg2MzcyODY1
| 2,004
|
LaRoSeDa
|
{
"login": "MihaelaGaman",
"id": 6823177,
"node_id": "MDQ6VXNlcjY4MjMxNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6823177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MihaelaGaman",
"html_url": "https://github.com/MihaelaGaman",
"followers_url": "https://api.github.com/users/MihaelaGaman/followers",
"following_url": "https://api.github.com/users/MihaelaGaman/following{/other_user}",
"gists_url": "https://api.github.com/users/MihaelaGaman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MihaelaGaman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MihaelaGaman/subscriptions",
"organizations_url": "https://api.github.com/users/MihaelaGaman/orgs",
"repos_url": "https://api.github.com/users/MihaelaGaman/repos",
"events_url": "https://api.github.com/users/MihaelaGaman/events{/privacy}",
"received_events_url": "https://api.github.com/users/MihaelaGaman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-08T01:06:32
| 2021-03-17T10:43:20
| 2021-03-17T10:43:20
|
CONTRIBUTOR
| null |
Add LaRoSeDa to huggingface datasets.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2004/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2004/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2004",
"html_url": "https://github.com/huggingface/datasets/pull/2004",
"diff_url": "https://github.com/huggingface/datasets/pull/2004.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2004.patch",
"merged_at": "2021-03-17T10:43:20"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2002
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2002/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2002/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2002/events
|
https://github.com/huggingface/datasets/pull/2002
| 823,955,744
|
MDExOlB1bGxSZXF1ZXN0NTg2MjgwNzE3
| 2,002
|
MOROCO
|
{
"login": "MihaelaGaman",
"id": 6823177,
"node_id": "MDQ6VXNlcjY4MjMxNzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/6823177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MihaelaGaman",
"html_url": "https://github.com/MihaelaGaman",
"followers_url": "https://api.github.com/users/MihaelaGaman/followers",
"following_url": "https://api.github.com/users/MihaelaGaman/following{/other_user}",
"gists_url": "https://api.github.com/users/MihaelaGaman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MihaelaGaman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MihaelaGaman/subscriptions",
"organizations_url": "https://api.github.com/users/MihaelaGaman/orgs",
"repos_url": "https://api.github.com/users/MihaelaGaman/repos",
"events_url": "https://api.github.com/users/MihaelaGaman/events{/privacy}",
"received_events_url": "https://api.github.com/users/MihaelaGaman/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-07T16:22:17
| 2021-03-19T09:52:06
| 2021-03-19T09:52:06
|
CONTRIBUTOR
| null |
Add MOROCO to huggingface datasets.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2002/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2002/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2002",
"html_url": "https://github.com/huggingface/datasets/pull/2002",
"diff_url": "https://github.com/huggingface/datasets/pull/2002.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2002.patch",
"merged_at": "2021-03-19T09:52:06"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/2001
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2001/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2001/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2001/events
|
https://github.com/huggingface/datasets/issues/2001
| 823,946,706
|
MDU6SXNzdWU4MjM5NDY3MDY=
| 2,001
|
Empty evidence document ("provenance") in KILT ELI5 dataset
|
{
"login": "donggyukimc",
"id": 16605764,
"node_id": "MDQ6VXNlcjE2NjA1NzY0",
"avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/donggyukimc",
"html_url": "https://github.com/donggyukimc",
"followers_url": "https://api.github.com/users/donggyukimc/followers",
"following_url": "https://api.github.com/users/donggyukimc/following{/other_user}",
"gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions",
"organizations_url": "https://api.github.com/users/donggyukimc/orgs",
"repos_url": "https://api.github.com/users/donggyukimc/repos",
"events_url": "https://api.github.com/users/donggyukimc/events{/privacy}",
"received_events_url": "https://api.github.com/users/donggyukimc/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Why did you close this issue? How did you end up finding the evidence documents? I'm running into a similar issue with other KILT tasks."
] | 2021-03-07T15:41:35
| 2022-12-19T19:25:14
| 2021-03-17T05:51:01
|
NONE
| null |
In the original KILT benchmark (https://github.com/facebookresearch/KILT),
every sample has its evidence document (i.e. wikipedia page id) for prediction.
For example, a sample in the ELI5 dataset has the following format, including provenance (= evidence document):
`{"id": "1kiwfx", "input": "In Trading Places (1983, Akroyd/Murphy) how does the scheme at the end of the movie work? Why would buying a lot of OJ at a high price ruin the Duke Brothers?", "output": [{"answer": "I feel so old. People have been askinbg what happened at the end of this movie for what must be the last 15 years of my life. It never stops. Every year/month/fortnight, I see someone asking what happened, and someone explaining. Andf it will keep on happening, until I am 90yrs old, in a home, with nothing but the Internet and my bladder to keep me going. And there it will be: \"what happens at the end of Trading Places?\""}, {"provenance": [{"wikipedia_id": "242855", "title": "Futures contract", "section": "Section::::Abstract.", "start_paragraph_id": 1, "start_character": 14, "end_paragraph_id": 1, "end_character": 612, "bleu_score": 0.9232808519770748}]}], "meta": {"partial_evidence": [{"wikipedia_id": "520990", "title": "Trading Places", "section": "Section::::Plot.\n", "start_paragraph_id": 7, "end_paragraph_id": 7, "meta": {"evidence_span": ["On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts.", "On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts. Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice.", "Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice."]}}]}}`
However, the KILT ELI5 dataset from the huggingface datasets library only contains an empty list of provenance.
`{'id': '1oy5tc', 'input': 'in football whats the point of wasting the first two plays with a rush - up the middle - not regular rush plays i get those', 'meta': {'left_context': '', 'mention': '', 'obj_surface': [], 'partial_evidence': [], 'right_context': '', 'sub_surface': [], 'subj_aliases': [], 'template_questions': []}, 'output': [{'answer': 'In most cases the O-Line is supposed to make a hole for the running back to go through. If you run too many plays to the outside/throws the defense will catch on.\n\nAlso, 2 5 yard plays gets you a new set of downs.', 'meta': {'score': 2}, 'provenance': []}, {'answer': "I you don't like those type of plays, watch CFL. We only get 3 downs so you can't afford to waste one. Lots more passing.", 'meta': {'score': 2}, 'provenance': []}]}
`
Should I perform another procedure to obtain the evidence documents?
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2001/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2001/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/2000
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/2000/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/2000/comments
|
https://api.github.com/repos/huggingface/datasets/issues/2000/events
|
https://github.com/huggingface/datasets/issues/2000
| 823,899,910
|
MDU6SXNzdWU4MjM4OTk5MTA=
| 2,000
|
Windows Permission Error (most recent version of datasets)
|
{
"login": "itsLuisa",
"id": 73881148,
"node_id": "MDQ6VXNlcjczODgxMTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/73881148?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itsLuisa",
"html_url": "https://github.com/itsLuisa",
"followers_url": "https://api.github.com/users/itsLuisa/followers",
"following_url": "https://api.github.com/users/itsLuisa/following{/other_user}",
"gists_url": "https://api.github.com/users/itsLuisa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/itsLuisa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/itsLuisa/subscriptions",
"organizations_url": "https://api.github.com/users/itsLuisa/orgs",
"repos_url": "https://api.github.com/users/itsLuisa/repos",
"events_url": "https://api.github.com/users/itsLuisa/events{/privacy}",
"received_events_url": "https://api.github.com/users/itsLuisa/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi @itsLuisa !\r\n\r\nCould you give us more information about the error you're getting, please?\r\nA copy-paste of the Traceback would be nice to get a better understanding of what is wrong :) ",
"Hello @SBrandeis , this is it:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 537, in incomplete_dir\r\n yield tmp_dir\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 578, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 656, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 982, in _prepare_split\r\n num_examples, num_bytes = writer.finalize()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 297, in finalize\r\n self.write_on_file()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 230, in write_on_file\r\n pa_array = pa.array(typed_sequence)\r\n File \"pyarrow\\array.pxi\", line 222, in pyarrow.lib.array\r\n File \"pyarrow\\array.pxi\", line 110, in pyarrow.lib._handle_arrow_array_protocol\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\arrow_writer.py\", line 97, in __arrow_array__\r\n out = pa.array(self.data, type=type)\r\n File \"pyarrow\\array.pxi\", line 305, in pyarrow.lib.array\r\n File \"pyarrow\\array.pxi\", line 39, in pyarrow.lib._sequence_to_array\r\n File \"pyarrow\\error.pxi\", line 122, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow\\error.pxi\", line 107, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowTypeError: Expected bytes, got a 'list' object\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"C:/Users/Luisa/Documents/Uni/WS 2020,21/Neural Networks/Final_Project/NN_Project/data_loading.py\", line 122, in <module>\r\n main()\r\n File \"C:/Users/Luisa/Documents/Uni/WS 2020,21/Neural Networks/Final_Project/NN_Project/data_loading.py\", line 111, in main\r\n dataset = datasets.load_dataset(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\load.py\", line 740, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 586, in download_and_prepare\r\n self._save_info()\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\contextlib.py\", line 131, in __exit__\r\n self.gen.throw(type, value, traceback)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\site-packages\\datasets\\builder.py\", line 543, in incomplete_dir\r\n shutil.rmtree(tmp_dir)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 740, in rmtree\r\n return _rmtree_unsafe(path, onerror)\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 618, in _rmtree_unsafe\r\n onerror(os.unlink, fullname, sys.exc_info())\r\n File \"C:\\Users\\Luisa\\AppData\\Local\\Programs\\Python\\Python38\\lib\\shutil.py\", line 616, in _rmtree_unsafe\r\n os.unlink(fullname)\r\nPermissionError: [WinError 32] Der Prozess kann nicht auf die Datei 
zugreifen, da sie von einem anderen Prozess verwendet wird: 'C:\\\\Users\\\\Luisa\\\\.cache\\\\huggingface\\\\datasets\\\\sample\\\\default-20ee7d51a6a9454f\\\\0.0.0\\\\5fc4c3a355ea77ab446bd31fca5082437600b8364d29b2b95264048bd1f398b1.incomplete\\\\sample-train.arrow'\r\n\r\nProcess finished with exit code 1\r\n```",
"Hi @itsLuisa, thanks for sharing the Traceback.\r\n\r\nYou are defining the \"id\" field as a `string` feature:\r\n```python\r\nclass Sample(datasets.GeneratorBasedBuilder):\r\n ...\r\n\r\n def _info(self):\r\n return datasets.DatasetInfo(\r\n features=datasets.Features(\r\n {\r\n \"id\": datasets.Value(\"string\"),\r\n # ^^ here\r\n \"tokens\": datasets.Sequence(datasets.Value(\"string\")),\r\n \"pos_tags\": datasets.Sequence(datasets.features.ClassLabel(names=[...])),\r\n[...]\r\n```\r\n\r\nBut in the `_generate_examples`, the \"id\" field is a list:\r\n```python\r\nids = list()\r\n```\r\n\r\nChanging:\r\n```python\r\n\"id\": datasets.Value(\"string\"),\r\n```\r\nInto:\r\n```python\r\n\"id\": datasets.Sequence(datasets.Value(\"string\")),\r\n```\r\n\r\nShould fix your issue.\r\n\r\nLet me know if this helps!",
"It seems to be working now, thanks a lot for the help, @SBrandeis !",
"Glad to hear it!\r\nI'm closing the issue"
] | 2021-03-07T11:55:28
| 2021-03-09T12:42:57
| 2021-03-09T12:42:57
|
NONE
| null |
Hi everyone,
Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stayed quite close to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py, except that I want to load the data from three local three-column tsv-files (id\ttokens\tpos_tags\n). I am using the most recent version of datasets. Thank you in advance!
Luisa
My script:
```
import datasets
import csv
logger = datasets.logging.get_logger(__name__)
class SampleConfig(datasets.BuilderConfig):
def __init__(self, **kwargs):
super(SampleConfig, self).__init__(**kwargs)
class Sample(datasets.GeneratorBasedBuilder):
BUILDER_CONFIGS = [
SampleConfig(name="conll2003", version=datasets.Version("1.0.0"), description="Conll2003 dataset"),
]
def _info(self):
return datasets.DatasetInfo(
description="Dataset with words and their POS-Tags",
features=datasets.Features(
{
"id": datasets.Value("string"),
"tokens": datasets.Sequence(datasets.Value("string")),
"pos_tags": datasets.Sequence(
datasets.features.ClassLabel(
names=[
"''",
",",
"-LRB-",
"-RRB-",
".",
":",
"CC",
"CD",
"DT",
"EX",
"FW",
"HYPH",
"IN",
"JJ",
"JJR",
"JJS",
"MD",
"NN",
"NNP",
"NNPS",
"NNS",
"PDT",
"POS",
"PRP",
"PRP$",
"RB",
"RBR",
"RBS",
"RP",
"TO",
"UH",
"VB",
"VBD",
"VBG",
"VBN",
"VBP",
"VBZ",
"WDT",
"WP",
"WRB",
"``"
]
)
),
}
),
supervised_keys=None,
homepage="https://catalog.ldc.upenn.edu/LDC2011T03",
citation="Weischedel, Ralph, et al. OntoNotes Release 4.0 LDC2011T03. Web Download. Philadelphia: Linguistic Data Consortium, 2011.",
)
def _split_generators(self, dl_manager):
loaded_files = dl_manager.download_and_extract(self.config.data_files)
return [
datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": loaded_files["train"]}),
datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": loaded_files["test"]}),
datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": loaded_files["val"]})
]
def _generate_examples(self, filepath):
logger.info("generating examples from = %s", filepath)
with open(filepath, encoding="cp1252") as f:
data = csv.reader(f, delimiter="\t")
ids = list()
tokens = list()
pos_tags = list()
for id_, line in enumerate(data):
#print(line)
if len(line) == 1:
if tokens:
yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags}
ids = list()
tokens = list()
pos_tags = list()
else:
ids.append(line[0])
tokens.append(line[1])
pos_tags.append(line[2])
# last example
yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags}
def main():
dataset = datasets.load_dataset(
"data_loading.py", data_files={
"train": "train.tsv",
"test": "test.tsv",
"val": "val.tsv"
}
)
#print(dataset)
if __name__=="__main__":
main()
```
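
For reference, the `ArrowTypeError: Expected bytes, got a 'list' object` quoted in the comments above comes from a mismatch between the declared feature type and what `_generate_examples` yields: `ids` is built as a list of strings per example, while the `"id"` feature is declared as a single string. A minimal sketch of the change suggested in the comments (the tag list is truncated here for brevity):
```python
import datasets

features = datasets.Features(
    {
        # "id" must be a sequence because _generate_examples yields a list of ids per example
        "id": datasets.Sequence(datasets.Value("string")),
        "tokens": datasets.Sequence(datasets.Value("string")),
        "pos_tags": datasets.Sequence(datasets.features.ClassLabel(names=["NN", "VB", "DT"])),  # truncated
    }
)
```
Alternatively, keep `"id": datasets.Value("string")` and yield a single id per example instead of a list.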
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/2000/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/2000/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1999
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1999/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1999/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1999/events
|
https://github.com/huggingface/datasets/pull/1999
| 823,753,591
|
MDExOlB1bGxSZXF1ZXN0NTg2MTM5ODMy
| 1,999
|
Add FashionMNIST dataset
|
{
"login": "gchhablani",
"id": 29076344,
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gchhablani",
"html_url": "https://github.com/gchhablani",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-06T21:36:57
| 2021-03-09T09:52:11
| 2021-03-09T09:52:11
|
CONTRIBUTOR
| null |
This PR adds [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1999/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1999/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1999",
"html_url": "https://github.com/huggingface/datasets/pull/1999",
"diff_url": "https://github.com/huggingface/datasets/pull/1999.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1999.patch",
"merged_at": "2021-03-09T09:52:11"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1998
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1998/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1998/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1998/events
|
https://github.com/huggingface/datasets/pull/1998
| 823,723,960
|
MDExOlB1bGxSZXF1ZXN0NTg2MTE4NTQ4
| 1,998
|
Add -DOCSTART- note to dataset card of conll-like datasets
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-06T19:08:29
| 2021-03-11T02:20:07
| 2021-03-11T02:20:07
|
CONTRIBUTOR
| null |
Closes #1983
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1998/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1998/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1998",
"html_url": "https://github.com/huggingface/datasets/pull/1998",
"diff_url": "https://github.com/huggingface/datasets/pull/1998.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1998.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1997
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1997/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1997/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1997/events
|
https://github.com/huggingface/datasets/issues/1997
| 823,679,465
|
MDU6SXNzdWU4MjM2Nzk0NjU=
| 1,997
|
from datasets import MoleculeDataset, GEOMDataset
|
{
"login": "futianfan",
"id": 5087210,
"node_id": "MDQ6VXNlcjUwODcyMTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5087210?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/futianfan",
"html_url": "https://github.com/futianfan",
"followers_url": "https://api.github.com/users/futianfan/followers",
"following_url": "https://api.github.com/users/futianfan/following{/other_user}",
"gists_url": "https://api.github.com/users/futianfan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/futianfan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/futianfan/subscriptions",
"organizations_url": "https://api.github.com/users/futianfan/orgs",
"repos_url": "https://api.github.com/users/futianfan/repos",
"events_url": "https://api.github.com/users/futianfan/events{/privacy}",
"received_events_url": "https://api.github.com/users/futianfan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false
| null |
[] | null |
[] | 2021-03-06T15:50:19
| 2021-03-06T16:13:26
| 2021-03-06T16:13:26
|
NONE
| null |
I got the ImportError: cannot import name 'MoleculeDataset' from 'datasets'. Has anyone met similar issues? Thanks!
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1997/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1997/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1996
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1996/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1996/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1996/events
|
https://github.com/huggingface/datasets/issues/1996
| 823,573,410
|
MDU6SXNzdWU4MjM1NzM0MTA=
| 1,996
|
Error when exploring `arabic_speech_corpus`
|
{
"login": "elgeish",
"id": 6879673,
"node_id": "MDQ6VXNlcjY4Nzk2NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6879673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elgeish",
"html_url": "https://github.com/elgeish",
"followers_url": "https://api.github.com/users/elgeish/followers",
"following_url": "https://api.github.com/users/elgeish/following{/other_user}",
"gists_url": "https://api.github.com/users/elgeish/gists{/gist_id}",
"starred_url": "https://api.github.com/users/elgeish/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/elgeish/subscriptions",
"organizations_url": "https://api.github.com/users/elgeish/orgs",
"repos_url": "https://api.github.com/users/elgeish/repos",
"events_url": "https://api.github.com/users/elgeish/events{/privacy}",
"received_events_url": "https://api.github.com/users/elgeish/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
},
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
},
{
"id": 2725241052,
"node_id": "MDU6TGFiZWwyNzI1MjQxMDUy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/speech",
"name": "speech",
"color": "d93f0b",
"default": false,
"description": ""
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting! We'll fix that as soon as possible",
"Actually soundfile is not a dependency of this dataset.\r\nThe error comes from a bug that was fixed in this commit: https://github.com/huggingface/datasets/pull/1767/commits/c304e63629f4453367de2fd42883a78768055532\r\nBasically the library used to consider the `import soundfile` in the docstring as a dependency, while it's just here as a code example.\r\n\r\nUpdating the viewer to the latest version of `datasets` should fix this issue\r\n",
"Hi! The viewer at https://huggingface.co/datasets/arabic_speech_corpus works fine. Closing."
] | 2021-03-06T05:55:20
| 2022-10-05T13:24:26
| 2022-10-05T13:24:26
|
NONE
| null |
Navigate to https://huggingface.co/datasets/viewer/?dataset=arabic_speech_corpus
Error:
```
ImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance'
Traceback:
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py", line 332, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 233, in <module>
configs = get_confs(option)
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 604, in wrapped_func
return get_or_create_cached_value()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 588, in get_or_create_cached_value
return_value = func(*args, **kwargs)
File "/home/sasha/nlp-viewer/run.py", line 145, in get_confs
module_path = nlp.load.prepare_module(path, dataset=True
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/datasets/load.py", line 342, in prepare_module
f"To be able to use this {module_type}, you need to install the following dependencies"
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1996/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1996/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1995
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1995/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1995/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1995/events
|
https://github.com/huggingface/datasets/pull/1995
| 822,878,431
|
MDExOlB1bGxSZXF1ZXN0NTg1NDI5NTg0
| 1,995
|
[Timit_asr] Make sure not only the first sample is used
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-05T08:42:51
| 2021-06-30T06:25:53
| 2021-03-05T08:58:59
|
MEMBER
| null |
When playing around with timit I noticed that only the first sample is used for all indices. I corrected this typo so that the dataset is correctly loaded.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1995/reactions",
"total_count": 4,
"+1": 4,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1995/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1995",
"html_url": "https://github.com/huggingface/datasets/pull/1995",
"diff_url": "https://github.com/huggingface/datasets/pull/1995.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1995.patch",
"merged_at": "2021-03-05T08:58:59"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1993
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1993/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1993/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1993/events
|
https://github.com/huggingface/datasets/issues/1993
| 822,758,387
|
MDU6SXNzdWU4MjI3NTgzODc=
| 1,993
|
How to load a dataset with load_from_disk and save it again after doing transformations without changing the original?
|
{
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! That looks like a bug, can you provide some code so that we can reproduce ?\r\nIt's not supposed to update the original dataset",
"Hi, I experimented with RAG. \r\n\r\nActually, you can run the [use_own_knowldge_dataset.py](https://github.com/shamanez/transformers/blob/rag-end-to-end-retrieval/examples/research_projects/rag/use_own_knowledge_dataset.py#L80). In the 80 you can save the dataset object to the disk with save_to_disk. Then in order to compute the embeddings in this use **load_from_disk**. \r\n\r\nThen finally save it. You can see the original dataset object (CSV after splitting also will be changed)\r\n\r\nOne more thing- when I save the dataset object with **save_to_disk** it name the arrow file with cache.... rather than using dataset. arrow. Can you add a variable that we can feed a name to save_to_disk function?",
"@lhoestq I also found that cache in tmp directory gets updated after transformations. This is really problematic when using datasets interactively. Let's say we use the shards function to a dataset loaded with csv, atm when we do transformations to shards and combine them it updates the original csv cache. ",
"I plan to update the save_to_disk method in #2025 so I can make sure the new save_to_disk doesn't corrupt your cache files.\r\nBut from your last message it looks like save_to_disk isn't the root cause right ?",
"ok, one more thing. When we use save_to_disk there are two files other than .arrow. dataset_info.json and state.json. Sometimes most of the fields in the dataset_infor.json are null, especially when saving dataset objects. Anyways I think load_from_disk uses the arrow files mentioned in state.json right? ",
"> Anyways I think load_from_disk uses the arrow files mentioned in state.json right?\r\n\r\nYes exactly",
"Perfect. For now, I am loading the dataset from CSV in my interactive process and will wait until you make the PR!"
] | 2021-03-05T05:25:50
| 2021-03-22T04:05:50
| 2021-03-22T04:05:50
|
NONE
| null |
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a dataset that contains 3.8 GB of information. Then, during my training process, I update that dataset object, add new elements, and save it in a different place.
When I save the dataset with **save_to_disk**, the original dataset, which is already on disk, also gets updated. I do not want to update it. How can I prevent this?
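
For context, a minimal sketch of the intended workflow, in which transformations return a new dataset and saving targets a separate directory (the paths and the added field are placeholders):
```python
from datasets import load_from_disk

original = load_from_disk("/path/to/original_dataset")     # placeholder path
updated = original.map(lambda example: {"new_field": 0})   # map returns a new dataset object
updated.save_to_disk("/path/to/updated_dataset")           # save to a different directory than the original
```
As noted in the comments above, the original on-disk files are not supposed to be modified by this; the behaviour described here was treated as a bug.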
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1993/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1993/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1991
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1991/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1991/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1991/events
|
https://github.com/huggingface/datasets/pull/1991
| 822,554,473
|
MDExOlB1bGxSZXF1ZXN0NTg1MTYwNDkx
| 1,991
|
Adding the conllpp dataset
|
{
"login": "ZihanWangKi",
"id": 21319243,
"node_id": "MDQ6VXNlcjIxMzE5MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZihanWangKi",
"html_url": "https://github.com/ZihanWangKi",
"followers_url": "https://api.github.com/users/ZihanWangKi/followers",
"following_url": "https://api.github.com/users/ZihanWangKi/following{/other_user}",
"gists_url": "https://api.github.com/users/ZihanWangKi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ZihanWangKi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ZihanWangKi/subscriptions",
"organizations_url": "https://api.github.com/users/ZihanWangKi/orgs",
"repos_url": "https://api.github.com/users/ZihanWangKi/repos",
"events_url": "https://api.github.com/users/ZihanWangKi/events{/privacy}",
"received_events_url": "https://api.github.com/users/ZihanWangKi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-04T22:19:43
| 2021-03-17T10:37:39
| 2021-03-17T10:37:39
|
CONTRIBUTOR
| null |
Adding the conllpp dataset; this is a revision of https://github.com/huggingface/datasets/pull/1910.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1991/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1991/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1991",
"html_url": "https://github.com/huggingface/datasets/pull/1991",
"diff_url": "https://github.com/huggingface/datasets/pull/1991.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1991.patch",
"merged_at": "2021-03-17T10:37:39"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1990
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1990/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1990/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1990/events
|
https://github.com/huggingface/datasets/issues/1990
| 822,384,502
|
MDU6SXNzdWU4MjIzODQ1MDI=
| 1,990
|
OSError: Memory mapping file failed: Cannot allocate memory
|
{
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Do you think this is trying to bring the dataset into memory and if I can avoid it to save on memory so it only brings a batch into memory? @lhoestq thank you",
"It's not trying to bring the dataset into memory.\r\n\r\nActually, it's trying to memory map the dataset file, which is different. It allows to load large dataset files without filling up memory.\r\n\r\nWhat dataset did you use to get this error ?\r\nOn what OS are you running ? What's your python and pyarrow version ?",
"Dear @lhoestq \r\nthank you so much for coming back to me. Please find info below:\r\n1) Dataset name: I used wikipedia with config 20200501.en\r\n2) I got these pyarrow in my environment:\r\npyarrow 2.0.0 <pip>\r\npyarrow 3.0.0 <pip>\r\n\r\n3) python version 3.7.10\r\n4) OS version \r\n\r\nlsb_release -a\r\nNo LSB modules are available.\r\nDistributor ID:\tDebian\r\nDescription:\tDebian GNU/Linux 10 (buster)\r\nRelease:\t10\r\nCodename:\tbuster\r\n\r\n\r\nIs there a way I could solve the memory issue and if I could run this model, I am using GeForce GTX 108, \r\nthanks \r\n",
"I noticed that the error happens when loading the validation dataset.\r\nWhat value of `data_args.validation_split_percentage` did you use ?",
"Dear @lhoestq \r\n\r\nthank you very much for the very sharp observation, indeed, this happens there, I use the default value of 5, I basically plan to subsample a part of the large dataset and choose it as validation set. Do you think this is bringing the data into memory during subsampling? Is there a way I could avoid this?\r\n\r\nThank you very much for the great help.\r\n\r\n\r\nOn Mon, Mar 8, 2021 at 11:28 AM Quentin Lhoest ***@***.***>\r\nwrote:\r\n\r\n> I noticed that the error happens when loading the validation dataset.\r\n> What value of data_args.validation_split_percentage did you use ?\r\n>\r\n> —\r\n> You are receiving this because you authored the thread.\r\n> Reply to this email directly, view it on GitHub\r\n> <https://github.com/huggingface/datasets/issues/1990#issuecomment-792655644>,\r\n> or unsubscribe\r\n> <https://github.com/notifications/unsubscribe-auth/AS37NMS337ZUJ7HGGVVCCR3TCSREFANCNFSM4YTYAQ2A>\r\n> .\r\n>\r\n",
"Methods like `dataset.shard`, `dataset.train_test_split`, `dataset.select` etc. don't bring the dataset in memory. \r\nThe only time when samples are brought to memory is when you access elements via `dataset[0]`, `dataset[:10]`, `dataset[\"my_column_names\"]`.\r\n\r\nBut it's possible that trying to use those methods to build your validation set doesn't fix the issue since, if I understand correctly, the error happens when when the dataset arrow file is opened (just before the 5% percentage is applied).\r\n\r\nDid you try to reproduce this issue in a google colab ? This would be super helpful to investigate why this happened.\r\n\r\nAlso maybe you can try clearing your cache at `~/.cache/huggingface/datasets` and try again. If the arrow file was corrupted somehow, removing it and rebuilding may fix the issue."
] | 2021-03-04T18:21:58
| 2021-08-04T18:04:25
| 2021-08-04T18:04:25
|
NONE
| null |
Hi,
I am trying to run a script with the wikipedia dataset; here is the command to reproduce the error. You can find the code for run_mlm.py in the huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py
```
python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.en --do_train --do_eval --output_dir /dara/test --max_seq_length 128
```
I am using transformer version: 4.3.2
But I got a memory error using this dataset. Is there a way I could save on memory with the datasets library when using the wikipedia dataset?
In particular, I need to train a model with multiple wikipedia datasets concatenated. Thank you very much @lhoestq for your help and suggestions:
```
File "run_mlm.py", line 441, in <module>
main()
File "run_mlm.py", line 233, in main
split=f"train[{data_args.validation_split_percentage}%:]",
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 750, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 740, in as_dataset
map_tuple=True,
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/utils/py_utils.py", line 225, in map_nested
return function(data_struct)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 757, in _build_single_dataset
in_memory=in_memory,
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 829, in _as_dataset
in_memory=in_memory,
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 215, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 236, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 171, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename
pa_table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 322, in read_table
stream = stream_from(filename)
File "pyarrow/io.pxi", line 782, in pyarrow.lib.memory_map
File "pyarrow/io.pxi", line 743, in pyarrow.lib.MemoryMappedFile._open
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: Memory mapping file failed: Cannot allocate memory
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1990/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1990/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1988
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1988/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1988/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1988/events
|
https://github.com/huggingface/datasets/issues/1988
| 822,324,605
|
MDU6SXNzdWU4MjIzMjQ2MDU=
| 1,988
|
Readme.md is misleading about kinds of datasets?
|
{
"login": "surak",
"id": 878399,
"node_id": "MDQ6VXNlcjg3ODM5OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/878399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/surak",
"html_url": "https://github.com/surak",
"followers_url": "https://api.github.com/users/surak/followers",
"following_url": "https://api.github.com/users/surak/following{/other_user}",
"gists_url": "https://api.github.com/users/surak/gists{/gist_id}",
"starred_url": "https://api.github.com/users/surak/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/surak/subscriptions",
"organizations_url": "https://api.github.com/users/surak/orgs",
"repos_url": "https://api.github.com/users/surak/repos",
"events_url": "https://api.github.com/users/surak/events{/privacy}",
"received_events_url": "https://api.github.com/users/surak/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Yes it's possible to use image data. There are already a few of them available (MNIST, CIFAR..)"
] | 2021-03-04T17:04:20
| 2021-08-04T18:05:23
| 2021-08-04T18:05:23
|
NONE
| null |
Hi!
In the README.md, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text."
But here:
https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117
You mention other kinds of datasets, with images and so on. I'm confused.
Is it possible to use it to store, say, imagenet locally?
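The first comment above confirms that image data is already supported; below is a rough sketch under that assumption (the dataset name comes from that comment, everything else is illustrative):
```python
from datasets import load_dataset

# MNIST is one of the image datasets that already ships with a loading script,
# so local image datasets can be handled the same way with a custom script.
mnist = load_dataset("mnist", split="train")
print(mnist)  # shows the features and the number of rows
```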
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1988/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1988/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1987
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1987/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1987/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1987/events
|
https://github.com/huggingface/datasets/issues/1987
| 822,308,956
|
MDU6SXNzdWU4MjIzMDg5NTY=
| 1,987
|
wmt15 is broken
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"It's reachable for the viewer and me, so I suppose it was down at that moment?"
] | 2021-03-04T16:46:25
| 2022-10-05T13:12:26
| 2022-10-05T13:12:26
|
MEMBER
| null |
While testing the hotfix, I tried a random other wmt release and found wmt15 to be broken:
```
python -c 'from datasets import load_dataset; load_dataset("wmt15", "de-en")'
Downloading: 2.91kB [00:00, 818kB/s]
Downloading: 3.02kB [00:00, 897kB/s]
Downloading: 41.1kB [00:00, 19.1MB/s]
Downloading and preparing dataset wmt15/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt15/de-en/1.0.0/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 578, in download_and_prepare
self._download_and_prepare(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 634, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt15/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f/wmt_utils.py", line 757, in _split_generators
downloaded_files = dl_manager.download_and_extract(urls_to_download)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 283, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 191, in download
downloaded_path_or_paths = map_nested(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 203, in map_nested
mapped = [
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 204, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in _single_map_nested
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in <listcomp>
mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested
return function(data_struct)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 214, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 274, in cached_path
output_path = get_from_cache(
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 614, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wmt/wmt15/resolve/main/training-parallel-nc-v10.tgz
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1987/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1987/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1986
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1986/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1986/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1986/events
|
https://github.com/huggingface/datasets/issues/1986
| 822,176,290
|
MDU6SXNzdWU4MjIxNzYyOTA=
| 1,986
|
wmt datasets fail to load
|
{
"login": "sabania",
"id": 32322564,
"node_id": "MDQ6VXNlcjMyMzIyNTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/32322564?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sabania",
"html_url": "https://github.com/sabania",
"followers_url": "https://api.github.com/users/sabania/followers",
"following_url": "https://api.github.com/users/sabania/following{/other_user}",
"gists_url": "https://api.github.com/users/sabania/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sabania/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sabania/subscriptions",
"organizations_url": "https://api.github.com/users/sabania/orgs",
"repos_url": "https://api.github.com/users/sabania/repos",
"events_url": "https://api.github.com/users/sabania/events{/privacy}",
"received_events_url": "https://api.github.com/users/sabania/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"caching issue, seems to work again.."
] | 2021-03-04T14:18:55
| 2021-03-04T14:31:07
| 2021-03-04T14:31:07
|
NONE
| null |
```
~\.cache\huggingface\modules\datasets_modules\datasets\wmt14\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\wmt_utils.py in _split_generators(self, dl_manager)
758 # Extract manually downloaded files.
759 manual_files = dl_manager.extract(manual_paths_dict)
--> 760 extraction_map = dict(downloaded_files, **manual_files)
761
762 for language in self.config.language_pair:
TypeError: type object argument after ** must be a mapping, not list
```
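For reference, a minimal reproduction of the underlying Python error, independent of the datasets code (the variable contents are made up):
```python
# dict(mapping, **other) requires `other` to be a mapping; if dl_manager.extract()
# returns a list instead of a dict, the same TypeError is raised.
downloaded_files = {"train": "/path/to/train.tgz"}
manual_files = []  # a list where a dict was expected

try:
    extraction_map = dict(downloaded_files, **manual_files)
except TypeError as err:
    print(err)  # e.g. "type object argument after ** must be a mapping, not list"
```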
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1986/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1986/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1985
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1985/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1985/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1985/events
|
https://github.com/huggingface/datasets/pull/1985
| 822,170,651
|
MDExOlB1bGxSZXF1ZXN0NTg0ODM4NjIw
| 1,985
|
Optimize int precision
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-04T14:12:23
| 2021-03-22T12:04:40
| 2021-03-16T09:44:00
|
MEMBER
| null |
Optimize int precision to reduce dataset file size.
Close #1973, close #1825, close #861.
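As an illustration of the idea (not of this PR's internal changes), a hedged sketch of how declaring a narrower integer type shrinks a column, using the standard `Value` feature types:
```python
from datasets import Dataset, Features, Value

data = {"label": list(range(100_000))}

# Python ints are stored as int64 by default; declaring int32 (or int8 for small
# label spaces) reduces the size of the stored column accordingly.
ds_default = Dataset.from_dict(data)
ds_small = Dataset.from_dict(data, features=Features({"label": Value("int32")}))

print(ds_default.features["label"])  # Value(dtype='int64', ...)
print(ds_small.features["label"])    # Value(dtype='int32', ...)
```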
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1985/reactions",
"total_count": 3,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1985/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1985",
"html_url": "https://github.com/huggingface/datasets/pull/1985",
"diff_url": "https://github.com/huggingface/datasets/pull/1985.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1985.patch",
"merged_at": "2021-03-16T09:44:00"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1984
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1984/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1984/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1984/events
|
https://github.com/huggingface/datasets/issues/1984
| 821,816,588
|
MDU6SXNzdWU4MjE4MTY1ODg=
| 1,984
|
Add tests for WMT datasets
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Dummy data generation is deprecated now. Closing."
] | 2021-03-04T06:46:42
| 2022-11-04T14:19:16
| 2022-11-04T14:19:16
|
MEMBER
| null |
As requested in #1981, we need tests for WMT datasets, using dummy data.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1984/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1984/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1983
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1983/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1983/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1983/events
|
https://github.com/huggingface/datasets/issues/1983
| 821,746,008
|
MDU6SXNzdWU4MjE3NDYwMDg=
| 1,983
|
The size of CoNLL-2003 is not consistent with the official release.
|
{
"login": "h-peng17",
"id": 39556019,
"node_id": "MDQ6VXNlcjM5NTU2MDE5",
"avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/h-peng17",
"html_url": "https://github.com/h-peng17",
"followers_url": "https://api.github.com/users/h-peng17/followers",
"following_url": "https://api.github.com/users/h-peng17/following{/other_user}",
"gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}",
"starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions",
"organizations_url": "https://api.github.com/users/h-peng17/orgs",
"repos_url": "https://api.github.com/users/h-peng17/repos",
"events_url": "https://api.github.com/users/h-peng17/events{/privacy}",
"received_events_url": "https://api.github.com/users/h-peng17/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi,\r\n\r\nif you inspect the raw data, you can find there are 946 occurrences of `-DOCSTART- -X- -X- O` in the train split and `14041 + 946 = 14987`, which is exactly the number of sentences the authors report. `-DOCSTART-` is a special line that acts as a boundary between two different documents and is filtered out in our implementation.\r\n\r\n@lhoestq What do you think about including these lines? ([Link](https://github.com/flairNLP/flair/issues/1097) to a similar issue in the flairNLP repo)",
"We should mention in the Conll2003 dataset card that these lines have been removed indeed.\r\n\r\nIf some users are interested in using these lines (maybe to recombine documents ?) then we can add a parameter to the conll2003 dataset to include them.\r\n\r\nBut IMO the default config should stay the current one (without the `-DOCSTART-` stuff), so that you can directly train NER models without additional preprocessing. Let me know what you think",
"@lhoestq Yes, I agree adding a small note should be sufficient.\r\n\r\nCurrently, NLTK's `ConllCorpusReader` ignores the `-DOCSTART-` lines so I think it's ok if we do the same. If there is an interest in the future to use these lines, then we can include them.",
"I added a mention of this in conll2003's dataset card:\r\nhttps://github.com/huggingface/datasets/blob/fc9796920da88486c3b97690969aabf03d6b4088/datasets/conll2003/README.md#conll2003\r\n\r\nEdit: just saw your PR @mariosasko (noticed it too late ^^)\r\nLet me take a look at it :)"
] | 2021-03-04T04:41:34
| 2022-10-05T13:13:26
| 2022-10-05T13:13:26
|
NONE
| null |
Thanks for sharing the dataset! But when I use CoNLL-2003, I have some questions.
The statistics of CoNLL-2003 in this repo are:
\#train 14041 \#dev 3250 \#test 3453
While the official statistics are:
\#train 14987 \#dev 3466 \#test 3684
Looking forward to your reply~
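The comments above trace the difference to the `-DOCSTART-` boundary lines that are filtered out; a rough sketch of verifying that against the raw file (the file path is illustrative):
```python
# Count the document-boundary markers in the official CoNLL-2003 train file
# (adjust the path to wherever the raw release is stored).
with open("conll2003/train.txt", encoding="utf-8") as f:
    docstarts = sum(1 for line in f if line.startswith("-DOCSTART-"))

print(docstarts)          # 946 for the official train split
print(14041 + docstarts)  # 14987, the officially reported number of sentences
```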
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1983/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1983/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1982
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1982/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1982/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1982/events
|
https://github.com/huggingface/datasets/pull/1982
| 821,448,791
|
MDExOlB1bGxSZXF1ZXN0NTg0MjM2NzQ0
| 1,982
|
Fix NestedDataStructure.data for empty dict
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-03T20:16:51
| 2021-03-04T16:46:04
| 2021-03-03T22:48:36
|
MEMBER
| null |
Fix #1981
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1982/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1982/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1982",
"html_url": "https://github.com/huggingface/datasets/pull/1982",
"diff_url": "https://github.com/huggingface/datasets/pull/1982.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1982.patch",
"merged_at": "2021-03-03T22:48:36"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1981
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1981/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1981/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1981/events
|
https://github.com/huggingface/datasets/issues/1981
| 821,411,109
|
MDU6SXNzdWU4MjE0MTExMDk=
| 1,981
|
wmt datasets fail to load
|
{
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"repos_url": "https://api.github.com/users/stas00/repos",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"@stas00 Mea culpa... May I fix this tomorrow morning?",
"yes, of course, I reverted to the version before that and it works ;)\r\n\r\nbut since a new release was just made you will probably need to make a hotfix.\r\n\r\nand add the wmt to the tests?",
"Sure, I will implement a regression test!",
"@stas00 it is fixed. @lhoestq are you releasing the hot fix or would you prefer me to do it?",
"I'll do a patch release for this issue early tomorrow.\r\n\r\nAnd yes we absolutly need tests for the wmt datasets: The missing tests for wmt are an artifact from the early development of the lib but now we have tools to generate automatically the dummy data used for tests :)",
"still facing the same issue or similar:\r\nfrom datasets import load_dataset\r\nwtm14_test = load_dataset('wmt14',\"de-en\",cache_dir='./datasets')\r\n\r\n~.cache\\huggingface\\modules\\datasets_modules\\datasets\\wmt14\\43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e\\wmt_utils.py in _split_generators(self, dl_manager)\r\n758 # Extract manually downloaded files.\r\n759 manual_files = dl_manager.extract(manual_paths_dict)\r\n--> 760 extraction_map = dict(downloaded_files, **manual_files)\r\n761\r\n762 for language in self.config.language_pair:\r\n\r\nTypeError: type object argument after ** must be a mapping, not list"
] | 2021-03-03T19:21:39
| 2021-03-04T14:16:47
| 2021-03-03T22:48:36
|
MEMBER
| null |
on master:
```
python -c 'from datasets import load_dataset; load_dataset("wmt14", "de-en")'
Downloading and preparing dataset wmt14/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt14/de-en/1.0.0/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e...
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset
builder_instance.download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 578, in download_and_prepare
self._download_and_prepare(
File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 634, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt14/43e717d978d2261502b0194999583acb874ba73b0f4aed0ada2889d1bb00f36e/wmt_utils.py", line 760, in _split_generators
extraction_map = dict(downloaded_files, **manual_files)
```
It worked fine recently. The same problem occurs if I try wmt16.
git bisect points to this commit from Feb 25 as the culprit https://github.com/huggingface/datasets/commit/792f1d9bb1c5361908f73e2ef7f0181b2be409fa
@albertvillanova
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1981/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1981/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1980
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1980/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1980/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1980/events
|
https://github.com/huggingface/datasets/pull/1980
| 821,312,810
|
MDExOlB1bGxSZXF1ZXN0NTg0MTI1OTUy
| 1,980
|
Loading all answers from drop
|
{
"login": "KaijuML",
"id": 25499439,
"node_id": "MDQ6VXNlcjI1NDk5NDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/25499439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/KaijuML",
"html_url": "https://github.com/KaijuML",
"followers_url": "https://api.github.com/users/KaijuML/followers",
"following_url": "https://api.github.com/users/KaijuML/following{/other_user}",
"gists_url": "https://api.github.com/users/KaijuML/gists{/gist_id}",
"starred_url": "https://api.github.com/users/KaijuML/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/KaijuML/subscriptions",
"organizations_url": "https://api.github.com/users/KaijuML/orgs",
"repos_url": "https://api.github.com/users/KaijuML/repos",
"events_url": "https://api.github.com/users/KaijuML/events{/privacy}",
"received_events_url": "https://api.github.com/users/KaijuML/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-03T17:13:07
| 2021-03-15T11:27:26
| 2021-03-15T11:27:26
|
CONTRIBUTOR
| null |
Hello all,
I propose this change to the DROP loading script so that all answers are loaded, no matter their type. Currently, only "span" answers are loaded, which excludes a significant number of answers from DROP (i.e. "number" and "date").
I updated the script with the version I use for my work. However, I couldn't find a way to verify that everything works when integrated with the datasets repo, since the `load_dataset` method seems to always download the script from GitHub rather than using local files.
Note that 9 items from the train set and 1 from the validation set have no answers; the script I propose simply does not load them.
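As a side note on local testing, a hedged sketch of pointing `load_dataset` at a local script path instead of the Hub (the relative path is illustrative):
```python
from datasets import load_dataset

# Passing a local path to the dataset script makes `datasets` use the local
# version instead of downloading the script from GitHub.
drop = load_dataset("./datasets/drop/drop.py", split="train")
print(drop)
```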
Let me know if there is anything else I can do,
Clément
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1980/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1980/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1980",
"html_url": "https://github.com/huggingface/datasets/pull/1980",
"diff_url": "https://github.com/huggingface/datasets/pull/1980.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1980.patch",
"merged_at": "2021-03-15T11:27:26"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1979
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1979/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1979/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1979/events
|
https://github.com/huggingface/datasets/pull/1979
| 820,977,853
|
MDExOlB1bGxSZXF1ZXN0NTgzODQ3MTk3
| 1,979
|
Add article_id and process test set template for semeval 2020 task 11…
|
{
"login": "hemildesai",
"id": 8195444,
"node_id": "MDQ6VXNlcjgxOTU0NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8195444?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hemildesai",
"html_url": "https://github.com/hemildesai",
"followers_url": "https://api.github.com/users/hemildesai/followers",
"following_url": "https://api.github.com/users/hemildesai/following{/other_user}",
"gists_url": "https://api.github.com/users/hemildesai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hemildesai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hemildesai/subscriptions",
"organizations_url": "https://api.github.com/users/hemildesai/orgs",
"repos_url": "https://api.github.com/users/hemildesai/repos",
"events_url": "https://api.github.com/users/hemildesai/events{/privacy}",
"received_events_url": "https://api.github.com/users/hemildesai/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-03T10:34:32
| 2021-03-13T10:59:40
| 2021-03-12T13:10:50
|
CONTRIBUTOR
| null |
… dataset
- `article_id` is needed to create the submission file for the task at https://propaganda.qcri.org/semeval2020-task11/
- The `technique classification` task provides, in a template for the test set, the span indices that are necessary to complete the task. This PR implements processing of that template for the dataset.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1979/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1979/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1979",
"html_url": "https://github.com/huggingface/datasets/pull/1979",
"diff_url": "https://github.com/huggingface/datasets/pull/1979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1979.patch",
"merged_at": "2021-03-12T13:10:50"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1978
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1978/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1978/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1978/events
|
https://github.com/huggingface/datasets/pull/1978
| 820,956,806
|
MDExOlB1bGxSZXF1ZXN0NTgzODI5Njgz
| 1,978
|
Adding ro sts dataset
|
{
"login": "lorinczb",
"id": 36982089,
"node_id": "MDQ6VXNlcjM2OTgyMDg5",
"avatar_url": "https://avatars.githubusercontent.com/u/36982089?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lorinczb",
"html_url": "https://github.com/lorinczb",
"followers_url": "https://api.github.com/users/lorinczb/followers",
"following_url": "https://api.github.com/users/lorinczb/following{/other_user}",
"gists_url": "https://api.github.com/users/lorinczb/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lorinczb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lorinczb/subscriptions",
"organizations_url": "https://api.github.com/users/lorinczb/orgs",
"repos_url": "https://api.github.com/users/lorinczb/repos",
"events_url": "https://api.github.com/users/lorinczb/events{/privacy}",
"received_events_url": "https://api.github.com/users/lorinczb/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-03T10:08:53
| 2021-03-05T10:00:14
| 2021-03-05T09:33:55
|
CONTRIBUTOR
| null |
Adding [RO-STS](https://github.com/dumitrescustefan/RO-STS) dataset
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1978/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1978/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1978",
"html_url": "https://github.com/huggingface/datasets/pull/1978",
"diff_url": "https://github.com/huggingface/datasets/pull/1978.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1978.patch",
"merged_at": "2021-03-05T09:33:55"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1976
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1976/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1976/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1976/events
|
https://github.com/huggingface/datasets/pull/1976
| 820,228,538
|
MDExOlB1bGxSZXF1ZXN0NTgzMjA3NDI4
| 1,976
|
Add datasets full offline mode with HF_DATASETS_OFFLINE
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-02T17:26:59
| 2021-03-03T15:45:31
| 2021-03-03T15:45:30
|
MEMBER
| null |
Add the HF_DATASETS_OFFLINE environment variable for users who want to use `datasets` offline without having to wait for the network timeouts/retries to happen. This was requested in https://github.com/huggingface/datasets/issues/1939
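For illustration, a minimal sketch of how the variable could be set (the value handling is an assumption; any truthy value like "1" is expected to work):
```python
import os

# Enable full offline mode before importing datasets
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset

# With the flag set, already-cached datasets are reloaded from disk immediately
# instead of first waiting on network timeouts and retries.
squad = load_dataset("squad", split="train")
```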
cc @stas00
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1976/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1976/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1976",
"html_url": "https://github.com/huggingface/datasets/pull/1976",
"diff_url": "https://github.com/huggingface/datasets/pull/1976.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1976.patch",
"merged_at": "2021-03-03T15:45:30"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1975
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1975/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1975/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1975/events
|
https://github.com/huggingface/datasets/pull/1975
| 820,205,485
|
MDExOlB1bGxSZXF1ZXN0NTgzMTg4NjM3
| 1,975
|
Fix flake8
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-02T16:59:13
| 2021-03-04T10:43:22
| 2021-03-04T10:43:22
|
MEMBER
| null |
Fix flake8 style.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1975/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1975/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1975",
"html_url": "https://github.com/huggingface/datasets/pull/1975",
"diff_url": "https://github.com/huggingface/datasets/pull/1975.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1975.patch",
"merged_at": "2021-03-04T10:43:22"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1974
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1974/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1974/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1974/events
|
https://github.com/huggingface/datasets/pull/1974
| 820,122,223
|
MDExOlB1bGxSZXF1ZXN0NTgzMTE5MDI0
| 1,974
|
feat(docs): navigate with left/right arrow keys
|
{
"login": "ydcjeff",
"id": 32727188,
"node_id": "MDQ6VXNlcjMyNzI3MTg4",
"avatar_url": "https://avatars.githubusercontent.com/u/32727188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ydcjeff",
"html_url": "https://github.com/ydcjeff",
"followers_url": "https://api.github.com/users/ydcjeff/followers",
"following_url": "https://api.github.com/users/ydcjeff/following{/other_user}",
"gists_url": "https://api.github.com/users/ydcjeff/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ydcjeff/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ydcjeff/subscriptions",
"organizations_url": "https://api.github.com/users/ydcjeff/orgs",
"repos_url": "https://api.github.com/users/ydcjeff/repos",
"events_url": "https://api.github.com/users/ydcjeff/events{/privacy}",
"received_events_url": "https://api.github.com/users/ydcjeff/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-02T15:24:50
| 2021-03-04T10:44:12
| 2021-03-04T10:42:48
|
NONE
| null |
Enables docs navigation with the left/right arrow keys. It can be useful for those who navigate with the keyboard a lot.
More info : https://github.com/sphinx-doc/sphinx/pull/2064
You can try here : https://29353-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1974/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1974/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1974",
"html_url": "https://github.com/huggingface/datasets/pull/1974",
"diff_url": "https://github.com/huggingface/datasets/pull/1974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1974.patch",
"merged_at": "2021-03-04T10:42:48"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1973
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1973/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1973/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1973/events
|
https://github.com/huggingface/datasets/issues/1973
| 820,077,312
|
MDU6SXNzdWU4MjAwNzczMTI=
| 1,973
|
Question: what gets stored in the datasets cache and why is it so huge?
|
{
"login": "ioana-blue",
"id": 17202292,
"node_id": "MDQ6VXNlcjE3MjAyMjky",
"avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ioana-blue",
"html_url": "https://github.com/ioana-blue",
"followers_url": "https://api.github.com/users/ioana-blue/followers",
"following_url": "https://api.github.com/users/ioana-blue/following{/other_user}",
"gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions",
"organizations_url": "https://api.github.com/users/ioana-blue/orgs",
"repos_url": "https://api.github.com/users/ioana-blue/repos",
"events_url": "https://api.github.com/users/ioana-blue/events{/privacy}",
"received_events_url": "https://api.github.com/users/ioana-blue/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Echo'ing this observation: I have a few datasets in the neighborhood of 2GB CSVs uncompressed, and when I use something like `Dataset.save_to_disk()` it's ~18GB on disk.\r\n\r\nIf this is unexpected behavior, would be happy to help run debugging as needed.",
"Thanks @ioana-blue for pointing out this problem (and thanks also @justin-yan). You are right that current implementation of the datasets caching files take too much memory. We are definitely changing this and optimizing the defaults, so that the file sizes are considerably reduced. I will come back to you as soon as this is fixed.",
"Thank you! Also I noticed that the files don't seem to be cleaned after the jobs finish. Last night I had only 3 jobs running, but the cache was still at 180GB. ",
"And to clarify, it's not memory, it's disk space. Thank you!",
"Hi ! As Albert said they can sometimes take more space that expected but we'll fix that soon.\r\n\r\nAlso, to give more details about caching: computations on a dataset are cached by default so that you don't have to recompute them the next time you run them.\r\n\r\nSo by default the cache files stay on your disk when you job is finished (so that if you re-execute it, it will be reloaded from the cache).\r\nFeel free to clear your cache after your job has finished, or disable caching using\r\n```python\r\nimport datasets\r\n\r\ndatasets.set_caching_enabled(False)\r\n```",
"Thanks for the tip, this is useful. ",
"Hi @ioana-blue, we have optimized Datasets' disk usage in the latest release v1.5.\r\n\r\nFeel free to update your Datasets version\r\n```shell\r\npip install -U datasets\r\n```\r\nand see if it better suits your needs.",
"Thank you!"
] | 2021-03-02T14:35:53
| 2021-03-30T14:03:59
| 2021-03-16T09:44:00
|
NONE
| null |
I'm running several training jobs (around 10) with a relatively large dataset (3M samples). The datasets cache reached 178G and it seems really large. What is stored in there and why is it so large? I don't think I noticed this problem before, and it seems to be related to the new version of the datasets library. Any insight? Thank you!
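A rough sketch of cleaning up a dataset's transform cache after a job, assuming the `cleanup_cache_files` helper available in this version of the library:
```python
from datasets import load_dataset

ds = load_dataset("csv", data_files="data.csv", split="train")
ds = ds.map(lambda example: example)  # every transform writes a new cache file

# Remove the cache files written for this dataset's transforms once the job is done
removed = ds.cleanup_cache_files()
print(f"removed {removed} cache files")
```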
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1973/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1973/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1972
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1972/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1972/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1972/events
|
https://github.com/huggingface/datasets/issues/1972
| 819,752,761
|
MDU6SXNzdWU4MTk3NTI3NjE=
| 1,972
|
'Dataset' object has no attribute 'rename_column'
|
{
"login": "farooqzaman1",
"id": 23195502,
"node_id": "MDQ6VXNlcjIzMTk1NTAy",
"avatar_url": "https://avatars.githubusercontent.com/u/23195502?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/farooqzaman1",
"html_url": "https://github.com/farooqzaman1",
"followers_url": "https://api.github.com/users/farooqzaman1/followers",
"following_url": "https://api.github.com/users/farooqzaman1/following{/other_user}",
"gists_url": "https://api.github.com/users/farooqzaman1/gists{/gist_id}",
"starred_url": "https://api.github.com/users/farooqzaman1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/farooqzaman1/subscriptions",
"organizations_url": "https://api.github.com/users/farooqzaman1/orgs",
"repos_url": "https://api.github.com/users/farooqzaman1/repos",
"events_url": "https://api.github.com/users/farooqzaman1/events{/privacy}",
"received_events_url": "https://api.github.com/users/farooqzaman1/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! `rename_column` has been added recently and will be available in the next release"
] | 2021-03-02T08:01:49
| 2022-06-01T16:08:47
| 2022-06-01T16:08:47
|
NONE
| null |
'Dataset' object has no attribute 'rename_column'
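As the comment above notes, the method was only added recently; a minimal sketch of its use once available (signature assumed from the other column operations):
```python
from datasets import Dataset

ds = Dataset.from_dict({"old_name": [1, 2, 3]})

# On releases that ship the method; older versions raise the AttributeError above.
ds = ds.rename_column("old_name", "new_name")
print(ds.column_names)  # ['new_name']
```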
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1972/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1972/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1971
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1971/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1971/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1971/events
|
https://github.com/huggingface/datasets/pull/1971
| 819,714,231
|
MDExOlB1bGxSZXF1ZXN0NTgyNzgyNTU0
| 1,971
|
Fix ArrowWriter closes stream at exit
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-02T07:12:34
| 2021-03-10T16:36:57
| 2021-03-10T16:36:57
|
MEMBER
| null |
The current implementation of ArrowWriter does not properly release the `stream` resource (by closing it) if its `finalize()` method is not called and/or an exception is raised before or during that call.
Therefore, ArrowWriter should be used as a context manager that properly closes its `stream` resource at exit.
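A hedged sketch of the context-manager usage this PR enables (constructor arguments are assumptions based on the description, not the exact internal API):
```python
from datasets import Features, Value
from datasets.arrow_writer import ArrowWriter

features = Features({"text": Value("string")})

# Entering the writer as a context manager guarantees the underlying stream is
# closed at exit, even if an exception occurs before or during finalize().
with ArrowWriter(features=features, path="data.arrow") as writer:
    writer.write({"text": "hello"})
    num_examples, num_bytes = writer.finalize()
```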
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1971/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1971/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1971",
"html_url": "https://github.com/huggingface/datasets/pull/1971",
"diff_url": "https://github.com/huggingface/datasets/pull/1971.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1971.patch",
"merged_at": "2021-03-10T16:36:56"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1970
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1970/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1970/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1970/events
|
https://github.com/huggingface/datasets/pull/1970
| 819,500,620
|
MDExOlB1bGxSZXF1ZXN0NTgyNjAzMzEw
| 1,970
|
Fixing the URL filtering for bad MLSUM examples in GEM
|
{
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yjernite/followers",
"following_url": "https://api.github.com/users/yjernite/following{/other_user}",
"gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yjernite/subscriptions",
"organizations_url": "https://api.github.com/users/yjernite/orgs",
"repos_url": "https://api.github.com/users/yjernite/repos",
"events_url": "https://api.github.com/users/yjernite/events{/privacy}",
"received_events_url": "https://api.github.com/users/yjernite/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-02T01:22:58
| 2021-03-02T03:19:06
| 2021-03-02T02:01:33
|
MEMBER
| null |
This updates the code and metadata to use the updated `gem_mlsum_bad_ids_fixed.json` file provided by @juand-r
cc @sebastianGehrmann
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1970/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1970/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1970",
"html_url": "https://github.com/huggingface/datasets/pull/1970",
"diff_url": "https://github.com/huggingface/datasets/pull/1970.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1970.patch",
"merged_at": "2021-03-02T02:01:33"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1967
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1967/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1967/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1967/events
|
https://github.com/huggingface/datasets/pull/1967
| 819,129,568
|
MDExOlB1bGxSZXF1ZXN0NTgyMjc5OTEx
| 1,967
|
Add Turkish News Category Dataset - 270K - Lite Version
|
{
"login": "yavuzKomecoglu",
"id": 5150963,
"node_id": "MDQ6VXNlcjUxNTA5NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5150963?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yavuzKomecoglu",
"html_url": "https://github.com/yavuzKomecoglu",
"followers_url": "https://api.github.com/users/yavuzKomecoglu/followers",
"following_url": "https://api.github.com/users/yavuzKomecoglu/following{/other_user}",
"gists_url": "https://api.github.com/users/yavuzKomecoglu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yavuzKomecoglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yavuzKomecoglu/subscriptions",
"organizations_url": "https://api.github.com/users/yavuzKomecoglu/orgs",
"repos_url": "https://api.github.com/users/yavuzKomecoglu/repos",
"events_url": "https://api.github.com/users/yavuzKomecoglu/events{/privacy}",
"received_events_url": "https://api.github.com/users/yavuzKomecoglu/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-01T18:21:59
| 2021-03-02T17:25:00
| 2021-03-02T17:25:00
|
CONTRIBUTOR
| null |
This PR adds the Turkish News Category Dataset (270K - Lite Version), a text classification dataset by me, @basakbuluz and @serdarakyol.
This dataset contains the same news as the current [interpress_news_category_tr dataset](https://huggingface.co/datasets/interpress_news_category_tr), but it carries less information, has fewer OCR errors, can be easily separated, and has been rearranged into 10 classes ("kültürsanat", "ekonomi", "siyaset", "eğitim", "dünya", "spor", "teknoloji", "magazin", "sağlık", "gündem").
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1967/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1967/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1967",
"html_url": "https://github.com/huggingface/datasets/pull/1967",
"diff_url": "https://github.com/huggingface/datasets/pull/1967.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1967.patch",
"merged_at": "2021-03-02T17:25:00"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1966
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1966/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1966/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1966/events
|
https://github.com/huggingface/datasets/pull/1966
| 819,101,253
|
MDExOlB1bGxSZXF1ZXN0NTgyMjU2MzE0
| 1,966
|
Fix metrics collision in separate multiprocessed experiments
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-03-01T17:45:18
| 2021-03-02T13:05:45
| 2021-03-02T13:05:44
|
MEMBER
| null |
As noticed in #1942, there is an issue with locks if you run multiple separate evaluation experiments in a multiprocessed setup.
Indeed there is a time span in Metric._finalize() where process 0 loses its lock before re-acquiring it. This is bad because the lock of process 0 tells the other processes that the corresponding cache file is available for writing/reading/deleting: we end up with one metric cache colliding with another. This can raise FileNotFound errors when a metric tries to read its cache file after the second, conflicting metric has deleted it.
To fix that, I made sure the lock file of process 0 stays acquired from the cache file creation to the end of the metric computation. This way the other metrics can simply sample a new hashing name in order to avoid the collision.
Finally, I added missing tests for separate experiments in a distributed setup.
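A minimal sketch of the locking idea (illustrative only, using the `filelock` package; the file names and retry logic are hypothetical and not the actual `Metric` internals):
```python
from filelock import FileLock, Timeout

# Process 0 keeps its lock on the cache file for the whole computation.
# A second, separate experiment that fails to acquire the lock knows the
# cache name is taken and samples a new one instead of colliding.
cache_file = "metric_cache-rank0.arrow"
lock = FileLock(cache_file + ".lock")
try:
    lock.acquire(timeout=1)
except Timeout:
    # another experiment owns this cache file: pick a new name and retry
    cache_file = "metric_cache-rank0-retry.arrow"
    lock = FileLock(cache_file + ".lock")
    lock.acquire()
try:
    pass  # write predictions/references here, then compute the metric
finally:
    lock.release()  # released only once the computation is finished
```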
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1966/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1966/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1966",
"html_url": "https://github.com/huggingface/datasets/pull/1966",
"diff_url": "https://github.com/huggingface/datasets/pull/1966.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1966.patch",
"merged_at": "2021-03-02T13:05:44"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1965
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1965/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1965/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1965/events
|
https://github.com/huggingface/datasets/issues/1965
| 818,833,460
|
MDU6SXNzdWU4MTg4MzM0NjA=
| 1,965
|
Can we parallelize the add_faiss_index process over dataset shards?
|
{
"login": "shamanez",
"id": 16892570,
"node_id": "MDQ6VXNlcjE2ODkyNTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shamanez",
"html_url": "https://github.com/shamanez",
"followers_url": "https://api.github.com/users/shamanez/followers",
"following_url": "https://api.github.com/users/shamanez/following{/other_user}",
"gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/shamanez/subscriptions",
"organizations_url": "https://api.github.com/users/shamanez/orgs",
"repos_url": "https://api.github.com/users/shamanez/repos",
"events_url": "https://api.github.com/users/shamanez/events{/privacy}",
"received_events_url": "https://api.github.com/users/shamanez/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi !\r\nAs far as I know not all faiss indexes can be computed in parallel and then merged. \r\nFor example [here](https://github.com/facebookresearch/faiss/wiki/Special-operations-on-indexes#splitting-and-merging-indexes) is is mentioned that only IndexIVF indexes can be merged.\r\nMoreover faiss already works using multithreading to parallelize the workload over your different CPU cores. You can find more info [here](https://github.com/facebookresearch/faiss/wiki/Threads-and-asynchronous-calls#internal-threading)\r\nSo I feel like the gains we would get by implementing a parallel `add_faiss_index` would not be that important, but let me know what you think.\r\n",
"Actually, you are right. I also had the same idea. I am trying this in the context of end-ton-end retrieval training in RAG. So far I have parallelized the embedding re-computation within the training loop by using datasets shards. \r\n\r\nThen I was thinking of can I calculate the indexes for each shard and combined them with **concatenate** before I save.",
"@lhoestq As you mentioned faiss is already using multiprocessing. I tried to do the add_index with faiss for a dataset object inside a RAY actor and the process became very slow... if fact it takes so much time. It is because a ray actor comes with a single CPU core unless we assign it more. I also tried assigning more cores but still running add_index in the main process is very fast. "
] | 2021-03-01T12:47:34
| 2021-03-04T19:40:56
| 2021-03-04T19:40:42
|
NONE
| null |
I am thinking of making the **add_faiss_index** process faster. What if we run add_faiss_index on separate dataset shards and then combine them (with dataset.concatenate) before saving the faiss.index file?
I feel that, theoretically, this could reduce retrieval accuracy since it affects the indexing process.
@lhoestq
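A minimal sketch of the per-shard idea (toy data and illustrative names; as noted in the replies above, faiss can only merge certain index types such as IndexIVF and already multithreads internally, so this is illustrative rather than a recommendation):
```python
import numpy as np
from datasets import Dataset

# Toy data standing in for precomputed embeddings.
ds = Dataset.from_dict({"embeddings": np.random.rand(1000, 64).tolist()})

num_shards = 4
shards = [ds.shard(num_shards=num_shards, index=i) for i in range(num_shards)]
for shard in shards:
    # each call could in principle run in its own process
    shard.add_faiss_index(column="embeddings")

# Combining the per-shard indexes back into a single index would require a
# faiss-level merge, which only works for IndexIVF-style indexes, and faiss
# already parallelizes add/search over CPU cores internally.
```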
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1965/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1965/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1964
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1964/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1964/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1964/events
|
https://github.com/huggingface/datasets/issues/1964
| 818,624,864
|
MDU6SXNzdWU4MTg2MjQ4NjQ=
| 1,964
|
Datasets.py function load_dataset does not match squad dataset
|
{
"login": "LeopoldACC",
"id": 44536699,
"node_id": "MDQ6VXNlcjQ0NTM2Njk5",
"avatar_url": "https://avatars.githubusercontent.com/u/44536699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LeopoldACC",
"html_url": "https://github.com/LeopoldACC",
"followers_url": "https://api.github.com/users/LeopoldACC/followers",
"following_url": "https://api.github.com/users/LeopoldACC/following{/other_user}",
"gists_url": "https://api.github.com/users/LeopoldACC/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LeopoldACC/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LeopoldACC/subscriptions",
"organizations_url": "https://api.github.com/users/LeopoldACC/orgs",
"repos_url": "https://api.github.com/users/LeopoldACC/repos",
"events_url": "https://api.github.com/users/LeopoldACC/events{/privacy}",
"received_events_url": "https://api.github.com/users/LeopoldACC/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi !\r\n\r\nTo fix 1, an you try to run this code ?\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"squad\", download_mode=\"force_redownload\")\r\n```\r\nMaybe the file your downloaded was corrupted, in this case redownloading this way should fix your issue 1.\r\n\r\nRegarding your 2nd point, you're right that loading the raw json this way doesn't give you a dataset with the column \"context\", \"question\" and \"answers\". Indeed the squad format is a very nested format so you have to preprocess the data. You can do it this way:\r\n```python\r\ndef process_squad(examples):\r\n \"\"\"\r\n Process a dataset in the squad format with columns \"title\" and \"paragraphs\"\r\n to return the dataset with columns \"context\", \"question\" and \"answers\".\r\n \"\"\"\r\n out = {\"context\": [], \"question\": [], \"answers\":[]} \r\n for paragraphs in examples[\"paragraphs\"]: \r\n for paragraph in paragraphs: \r\n for qa in paragraph[\"qas\"]: \r\n answers = [{\"answer_start\": answer[\"answer_start\"], \"text\": answer[\"text\"].strip()} for answer in qa[\"answers\"]] \r\n out[\"context\"].append(paragraph[\"context\"].strip()) \r\n out[\"question\"].append(qa[\"question\"].strip()) \r\n out[\"answers\"].append(answers) \r\n return out\r\n\r\ndatasets = load_dataset(extension, data_files=data_files, field=\"data\")\r\ncolumn_names = datasets[\"train\"].column_names\r\n\r\nif set(column_names) == {\"title\", \"paragraphs\"}:\r\n datasets = datasets.map(process_squad, batched=True, remove_columns=column_names)\r\n```\r\n\r\nHope that helps :)",
"Thks for quickly answering!\r\n### 1 I try the first way,but seems not work \r\n```\r\nTraceback (most recent call last):\r\n File \"examples/question-answering/run_qa.py\", line 503, in <module>\r\n main()\r\n File \"examples/question-answering/run_qa.py\", line 218, in main\r\n datasets = load_dataset(data_args.dataset_name, download_mode=\"force_redownload\")\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/load.py\", line 746, in load_dataset\r\n use_auth_token=use_auth_token,\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py\", line 573, in download_and_prepare\r\n dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py\", line 633, in _download_and_prepare\r\n self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), \"dataset source files\"\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/utils/info_utils.py\", line 39, in verify_checksums\r\n raise NonMatchingChecksumError(error_msg + str(bad_urls))\r\ndatasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:\r\n['https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json']\r\n```\r\n### 2 I try the second way,and run the examples/question-answering/run_qa.py,it lead to another bug orz..\r\n```\r\nTraceback (most recent call last):\r\n File \"examples/question-answering/run_qa.py\", line 523, in <module>\r\n main()\r\n File \"examples/question-answering/run_qa.py\", line 379, in main\r\n load_from_cache_file=not data_args.overwrite_cache,\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1120, in map\r\n update_data = does_function_return_dict(test_inputs, test_indices)\r\n File \"/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/arrow_dataset.py\", line 1091, in does_function_return_dict\r\n function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)\r\n File \"examples/question-answering/run_qa.py\", line 339, in prepare_train_features\r\n if len(answers[\"answer_start\"]) == 0:\r\nTypeError: list indices must be integers or slices, not str\r\n```\r\n## may be the function prepare_train_features in run_qa.py need to fix,I think is that the prep\r\n```python\r\nfor i, offsets in enumerate(offset_mapping):\r\n # We will label impossible answers with the index of the CLS token.\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n\r\n # Grab the sequence corresponding to that example (to know what is the context and what is the question).\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n\r\n # One example can give several spans, this is the index of the example containing this span of text.\r\n sample_index = sample_mapping[i]\r\n answers = examples[answer_column_name][sample_index]\r\n print(examples,answers)\r\n # If no answers are given, set the cls_index as answer.\r\n if len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start/end character index of the answer in the text.\r\n start_char = answers[\"answer_start\"][0]\r\n end_char = start_char + len(answers[\"text\"][0])\r\n\r\n # Start token index of the current span in the 
text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != (1 if pad_on_right else 0):\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != (1 if pad_on_right else 0):\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n\r\n return tokenized_examples\r\n``` ",
"## I have fixed it, @lhoestq \r\n### the first section change as you said and add [\"id\"]\r\n```python\r\ndef process_squad(examples):\r\n \"\"\"\r\n Process a dataset in the squad format with columns \"title\" and \"paragraphs\"\r\n to return the dataset with columns \"context\", \"question\" and \"answers\".\r\n \"\"\"\r\n # print(examples)\r\n out = {\"context\": [], \"question\": [], \"answers\":[],\"id\":[]} \r\n for paragraphs in examples[\"paragraphs\"]: \r\n for paragraph in paragraphs: \r\n for qa in paragraph[\"qas\"]: \r\n answers = [{\"answer_start\": answer[\"answer_start\"], \"text\": answer[\"text\"].strip()} for answer in qa[\"answers\"]] \r\n out[\"context\"].append(paragraph[\"context\"].strip()) \r\n out[\"question\"].append(qa[\"question\"].strip()) \r\n out[\"answers\"].append(answers) \r\n out[\"id\"].append(qa[\"id\"]) \r\n return out\r\ncolumn_names = datasets[\"train\"].column_names if training_args.do_train else datasets[\"validation\"].column_names\r\n# print(datasets[\"train\"].column_names)\r\nif set(column_names) == {\"title\", \"paragraphs\"}:\r\n datasets = datasets.map(process_squad, batched=True, remove_columns=column_names)\r\n# Preprocessing the datasets.\r\n# Preprocessing is slighlty different for training and evaluation.\r\nif training_args.do_train:\r\n column_names = datasets[\"train\"].column_names\r\nelse:\r\n column_names = datasets[\"validation\"].column_names\r\n# print(column_names)\r\nquestion_column_name = \"question\" if \"question\" in column_names else column_names[0]\r\ncontext_column_name = \"context\" if \"context\" in column_names else column_names[1]\r\nanswer_column_name = \"answers\" if \"answers\" in column_names else column_names[2]\r\n```\r\n### the second section\r\n```python\r\ndef prepare_train_features(examples):\r\n # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results\r\n # in one example possible giving several features when a context is long, each of those features having a\r\n # context that overlaps a bit the context of the previous feature.\r\n tokenized_examples = tokenizer(\r\n examples[question_column_name if pad_on_right else context_column_name],\r\n examples[context_column_name if pad_on_right else question_column_name],\r\n truncation=\"only_second\" if pad_on_right else \"only_first\",\r\n max_length=data_args.max_seq_length,\r\n stride=data_args.doc_stride,\r\n return_overflowing_tokens=True,\r\n return_offsets_mapping=True,\r\n padding=\"max_length\" if data_args.pad_to_max_length else False,\r\n )\r\n\r\n # Since one example might give us several features if it has a long context, we need a map from a feature to\r\n # its corresponding example. This key gives us just that.\r\n sample_mapping = tokenized_examples.pop(\"overflow_to_sample_mapping\")\r\n # The offset mappings will give us a map from token to character position in the original context. 
This will\r\n # help us compute the start_positions and end_positions.\r\n offset_mapping = tokenized_examples.pop(\"offset_mapping\")\r\n\r\n # Let's label those examples!\r\n tokenized_examples[\"start_positions\"] = []\r\n tokenized_examples[\"end_positions\"] = []\r\n\r\n for i, offsets in enumerate(offset_mapping):\r\n # We will label impossible answers with the index of the CLS token.\r\n input_ids = tokenized_examples[\"input_ids\"][i]\r\n cls_index = input_ids.index(tokenizer.cls_token_id)\r\n\r\n # Grab the sequence corresponding to that example (to know what is the context and what is the question).\r\n sequence_ids = tokenized_examples.sequence_ids(i)\r\n\r\n # One example can give several spans, this is the index of the example containing this span of text.\r\n sample_index = sample_mapping[i]\r\n answers = examples[answer_column_name][sample_index]\r\n # print(examples,answers,offset_mapping,tokenized_examples)\r\n # If no answers are given, set the cls_index as answer.\r\n if len(answers) == 0:#len(answers[\"answer_start\"]) == 0:\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Start/end character index of the answer in the text.\r\n start_char = answers[0][\"answer_start\"]\r\n end_char = start_char + len(answers[0][\"text\"])\r\n\r\n # Start token index of the current span in the text.\r\n token_start_index = 0\r\n while sequence_ids[token_start_index] != (1 if pad_on_right else 0):\r\n token_start_index += 1\r\n\r\n # End token index of the current span in the text.\r\n token_end_index = len(input_ids) - 1\r\n while sequence_ids[token_end_index] != (1 if pad_on_right else 0):\r\n token_end_index -= 1\r\n\r\n # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index).\r\n if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char):\r\n tokenized_examples[\"start_positions\"].append(cls_index)\r\n tokenized_examples[\"end_positions\"].append(cls_index)\r\n else:\r\n # Otherwise move the token_start_index and token_end_index to the two ends of the answer.\r\n # Note: we could go after the last offset if the answer is the last word (edge case).\r\n while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char:\r\n token_start_index += 1\r\n tokenized_examples[\"start_positions\"].append(token_start_index - 1)\r\n while offsets[token_end_index][1] >= end_char:\r\n token_end_index -= 1\r\n tokenized_examples[\"end_positions\"].append(token_end_index + 1)\r\n return tokenized_examples\r\n```",
"I'm glad you managed to fix run_qa.py for your case :)\r\n\r\nRegarding the checksum error, I'm not able to reproduce on my side.\r\nThis errors says that the downloaded file doesn't match the expected file.\r\n\r\nCould you try running this and let me know if you get the same output as me ?\r\n```python\r\nfrom datasets.utils.info_utils import get_size_checksum_dict\r\nfrom datasets import cached_path\r\n\r\nget_size_checksum_dict(cached_path(\"https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\"))\r\n# {'num_bytes': 30288272, 'checksum': '3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955'}\r\n```",
"I run the code,and it show below:\r\n```\r\n>>> from datasets.utils.info_utils import get_size_checksum_dict\r\n>>> from datasets import cached_path\r\n>>> get_size_checksum_dict(cached_path(\"https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json\"))\r\nDownloading: 30.3MB [04:13, 120kB/s]\r\n{'num_bytes': 30288272, 'checksum': '3527663986b8295af4f7fcdff1ba1ff3f72d07d61a20f487cb238a6ef92fd955'}\r\n```",
"Alright ! So in this case redownloading the file with `download_mode=\"force_redownload\"` should fix it. Can you try using `download_mode=\"force_redownload\"` again ?\r\n\r\nNot sure why it didn't work for you the first time though :/"
] | 2021-03-01T08:41:31
| 2022-10-05T13:09:47
| 2022-10-05T13:09:47
|
NONE
| null |
### 1 When I try to train lxmert and follow the code in the README that uses --dataset_name:
```shell
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad
```
the error is:
```
Downloading and preparing dataset squad/plain_text (download: 33.51 MiB, generated: 85.75 MiB, post-processed: Unknown size, total: 119.27 MiB) to /home2/zhenggo1/.cache/huggingface/datasets/squad/plain_text/1.0.0/4c81550d83a2ac7c7ce23783bd8ff36642800e6633c1f18417fb58c3ff50cdd7...
Traceback (most recent call last):
File "examples/question-answering/run_qa.py", line 501, in <module>
main()
File "examples/question-answering/run_qa.py", line 217, in main
datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset
use_auth_token=use_auth_token,
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/builder.py", line 633, in _download_and_prepare
self.info.download_checksums, dl_manager.get_recorded_sizes_checksums(), "dataset source files"
File "/home2/zhenggo1/anaconda3/envs/lpot/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json']
```
I also tried to check the [checksum link](https://github.com/huggingface/datasets/blob/master/datasets/squad/dataset_infos.json); is the problem that plain_text does not have a checksum?
### 2 When I try to train lxmert using a local dataset:
```
python examples/question-answering/run_qa.py --model_name_or_path unc-nlp/lxmert-base-uncased --train_file $SQUAD_DIR/train-v1.1.json --validation_file $SQUAD_DIR/dev-v1.1.json --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /home2/zhenggo1/checkpoint/lxmert_squad
```
The error is:
```
['title', 'paragraphs']
Traceback (most recent call last):
File "examples/question-answering/run_qa.py", line 501, in <module>
main()
File "examples/question-answering/run_qa.py", line 273, in main
answer_column_name = "answers" if "answers" in column_names else column_names[2]
IndexError: list index out of range
```
I printed the answer_column_name and found that the local squad dataset needs preprocessing by the datasets package so that the code below can work:
```
if training_args.do_train:
column_names = datasets["train"].column_names
else:
column_names = datasets["validation"].column_names
print(datasets["train"].column_names)
question_column_name = "question" if "question" in column_names else column_names[0]
context_column_name = "context" if "context" in column_names else column_names[1]
answer_column_name = "answers" if "answers" in column_names else column_names[2]
```
## Please tell me how to fix the bug, thanks a lot!
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1964/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1964/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1963
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1963/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1963/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1963/events
|
https://github.com/huggingface/datasets/issues/1963
| 818,289,967
|
MDU6SXNzdWU4MTgyODk5Njc=
| 1,963
|
bug in SNLI dataset
|
{
"login": "dorost1234",
"id": 79165106,
"node_id": "MDQ6VXNlcjc5MTY1MTA2",
"avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorost1234",
"html_url": "https://github.com/dorost1234",
"followers_url": "https://api.github.com/users/dorost1234/followers",
"following_url": "https://api.github.com/users/dorost1234/following{/other_user}",
"gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions",
"organizations_url": "https://api.github.com/users/dorost1234/orgs",
"repos_url": "https://api.github.com/users/dorost1234/repos",
"events_url": "https://api.github.com/users/dorost1234/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorost1234/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! The labels -1 correspond to the examples without gold labels in the original snli dataset.\r\nFeel free to remove these examples if you don't need them by using\r\n```python\r\ndata = data.filter(lambda x: x[\"label\"] != -1)\r\n```"
] | 2021-02-28T19:36:20
| 2022-10-05T13:13:46
| 2022-10-05T13:13:46
|
NONE
| null |
Hi
There are labels of -1 in the train set of the SNLI dataset; please find the code below:
```
import numpy as np
import datasets
data = datasets.load_dataset("snli")["train"]
labels = []
for d in data:
labels.append(d["label"])
print(np.unique(labels))
```
and results:
`[-1 0 1 2]`
version of datasets used: `datasets 1.2.1 <pip>`
thanks for your help. @lhoestq
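As noted in the reply above, the -1 labels mark examples without a gold label in the original SNLI release; a minimal sketch of checking and dropping them (following the suggested filter):
```python
import numpy as np
from datasets import load_dataset

data = load_dataset("snli", split="train")
print(np.unique(data["label"]))  # [-1  0  1  2]

# drop the unlabeled examples, as suggested in the reply
data = data.filter(lambda x: x["label"] != -1)
print(np.unique(data["label"]))  # [0 1 2]
```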
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1963/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1963/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/1962
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1962/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1962/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1962/events
|
https://github.com/huggingface/datasets/pull/1962
| 818,089,156
|
MDExOlB1bGxSZXF1ZXN0NTgxNDQwNzM4
| 1,962
|
Fix unused arguments
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-02-28T02:47:07
| 2021-03-11T02:18:17
| 2021-03-03T16:37:50
|
CONTRIBUTOR
| null |
I noticed that some args in the codebase are not used, so I tracked down all such occurrences with Pylance and fixed them.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1962/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1962/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1962",
"html_url": "https://github.com/huggingface/datasets/pull/1962",
"diff_url": "https://github.com/huggingface/datasets/pull/1962.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1962.patch",
"merged_at": "2021-03-03T16:37:50"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1961
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1961/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1961/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1961/events
|
https://github.com/huggingface/datasets/pull/1961
| 818,077,947
|
MDExOlB1bGxSZXF1ZXN0NTgxNDM3NDI0
| 1,961
|
Add sst dataset
|
{
"login": "patpizio",
"id": 15801338,
"node_id": "MDQ6VXNlcjE1ODAxMzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/15801338?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patpizio",
"html_url": "https://github.com/patpizio",
"followers_url": "https://api.github.com/users/patpizio/followers",
"following_url": "https://api.github.com/users/patpizio/following{/other_user}",
"gists_url": "https://api.github.com/users/patpizio/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patpizio/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patpizio/subscriptions",
"organizations_url": "https://api.github.com/users/patpizio/orgs",
"repos_url": "https://api.github.com/users/patpizio/repos",
"events_url": "https://api.github.com/users/patpizio/events{/privacy}",
"received_events_url": "https://api.github.com/users/patpizio/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-02-28T02:08:29
| 2021-03-04T10:38:53
| 2021-03-04T10:38:53
|
CONTRIBUTOR
| null |
Related to #1934—Add the Stanford Sentiment Treebank dataset.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1961/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1961/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1961",
"html_url": "https://github.com/huggingface/datasets/pull/1961",
"diff_url": "https://github.com/huggingface/datasets/pull/1961.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1961.patch",
"merged_at": "2021-03-04T10:38:53"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/1960
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/1960/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/1960/comments
|
https://api.github.com/repos/huggingface/datasets/issues/1960/events
|
https://github.com/huggingface/datasets/pull/1960
| 818,073,154
|
MDExOlB1bGxSZXF1ZXN0NTgxNDMzOTY4
| 1,960
|
Allow stateful function in dataset.map
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2021-02-28T01:29:05
| 2021-03-23T15:26:49
| 2021-03-23T15:26:49
|
CONTRIBUTOR
| null |
Removes the "test type" step in Dataset.map, which would modify the state of a stateful function. Now the return type of the map function is inferred after processing the first example.
Fixes #1940
@lhoestq I'm not very happy with the use of `nonlocal` and would like to hear your opinion on this.
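A minimal sketch of the kind of stateful function this change affects (hypothetical example, not taken from the PR itself):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

# A stateful mapping function that keeps a running counter. Running an extra
# "test type" pass on one example would advance the counter and shift every id,
# which is why the dry run is removed and the return type is inferred from the
# first real example instead.
counter = {"value": 0}

def add_running_id(example):
    example["id"] = counter["value"]
    counter["value"] += 1
    return example

ds = ds.map(add_running_id)
print(ds["id"])  # expected: [0, 1, 2]
```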
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/1960/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/1960/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1960",
"html_url": "https://github.com/huggingface/datasets/pull/1960",
"diff_url": "https://github.com/huggingface/datasets/pull/1960.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1960.patch",
"merged_at": "2021-03-23T15:26:49"
}
| true
|