| url (string, 58-61 chars) | repository_url (string, 1 class) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-1.83B) | node_id (string, 18-32 chars) | number (int64, 1-6.09k) | title (string, 1-290 chars) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | milestone (dict) | comments (int64, 0-54) | created_at (string, 20 chars) | updated_at (string, 20 chars) | closed_at (string, 20 chars, nullable) | active_lock_reason (null) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1787 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1787/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1787/comments | https://api.github.com/repos/huggingface/datasets/issues/1787/events | https://github.com/huggingface/datasets/pull/1787 | 795,485,842 | MDExOlB1bGxSZXF1ZXN0NTYyODI1NTI3 | 1,787 | Update the CommonGen citation information | [] | closed | false | null | 0 | 2021-01-27T22:12:47Z | 2021-01-28T13:56:29Z | 2021-01-28T13:56:29Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1787/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1787/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1787.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1787",
"merged_at": "2021-01-28T13:56:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1787.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1787"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/2716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2716/comments | https://api.github.com/repos/huggingface/datasets/issues/2716/events | https://github.com/huggingface/datasets/issues/2716 | 952,902,778 | MDU6SXNzdWU5NTI5MDI3Nzg= | 2,716 | Calling shuffle on IterableDataset will disable batching in case any functions were mapped | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-07-26T13:24:59Z | 2021-07-26T18:04:43Z | 2021-07-26T18:04:43Z | null | When using a dataset in streaming mode, if one applies the `shuffle` method on the dataset and a `map` method for which `batched=True`, then the batching operation will not happen; instead, `batched` will be set to `False`.
I did a root-cause analysis on the `datasets` codebase; the problem emerges from [this line of code](https://github.com/huggingface/datasets/blob/d25a0bf94d9f9a9aa6cabdf5b450b9c327d19729/src/datasets/iterable_dataset.py#L197), which reads
`self.ex_iterable.shuffle_data_sources(seed), function=self.function, batch_size=self.batch_size`. As one can see, it is missing the `batched` argument, which means the iterator falls back to the constructor's default value, which in this case is `False`.
To remedy the problem we can change this line to
`self.ex_iterable.shuffle_data_sources(seed), function=self.function, batched=self.batched, batch_size=self.batch_size`
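A minimal sketch of the kind of pipeline that should hit this (the dataset name, seed, and batch size are arbitrary illustrations, not taken from the report):
```python
from datasets import load_dataset

# Any streaming dataset works here; "oscar" is an arbitrary example.
ds = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)

# Map a function with batched=True...
ds = ds.map(lambda batch: batch, batched=True, batch_size=8)

# ...then shuffle: the mapped iterable is rebuilt without the batched flag,
# so the function is re-applied example by example (batched=False).
ds = ds.shuffle(seed=42, buffer_size=1000)
```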
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2716/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2716/timeline | null | completed | null | null | false | [
"Hi :) Good catch ! Feel free to open a PR if you want to contribute, this would be very welcome ;)",
"Have raised the PR [here](https://github.com/huggingface/datasets/pull/2717)",
"Fixed by #2717."
] |
https://api.github.com/repos/huggingface/datasets/issues/2268 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2268/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2268/comments | https://api.github.com/repos/huggingface/datasets/issues/2268/events | https://github.com/huggingface/datasets/pull/2268 | 868,773,380 | MDExOlB1bGxSZXF1ZXN0NjI0MjQyODg1 | 2,268 | Don't use pyarrow 4.0.0 since it segfaults when casting a sliced ListArray of integers | [] | closed | false | null | 3 | 2021-04-27T11:58:28Z | 2021-06-12T12:44:49Z | 2021-04-27T13:43:20Z | null | This test `tests/test_table.py::test_concatenation_table_cast` segfaults with the latest update of pyarrow 4.0.0.
Setting `pyarrow<4.0.0` for now. I'll open an issue on JIRA once I know more about the origin of the issue. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2268/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2268/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2268.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2268",
"merged_at": "2021-04-27T13:43:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2268.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2268"
} | true | [
"@lhoestq note that the segfault also occurs on Linux.",
"Created the ticket at\r\nhttps://issues.apache.org/jira/browse/ARROW-12568",
"@lhoestq the ticket you mentioned is now in state resolved. Pyarrow supports AArch64 after version 4.0.0. Because of this restriction `datasets` is not installing in AArch64 sy... |
https://api.github.com/repos/huggingface/datasets/issues/373 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/373/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/373/comments | https://api.github.com/repos/huggingface/datasets/issues/373/events | https://github.com/huggingface/datasets/issues/373 | 654,845,133 | MDU6SXNzdWU2NTQ4NDUxMzM= | 373 | Segmentation fault when loading local JSON dataset as of #372 | [] | closed | false | null | 11 | 2020-07-10T15:04:25Z | 2022-10-04T18:05:47Z | 2022-10-04T18:05:47Z | null | The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault.
```
dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data')
```
causes
```
Using custom data configuration default
Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/XXX/.cache/huggingface/datasets/json/default/0.0.0...
0 tables [00:00, ? tables/s]Segmentation fault (core dumped)
```
where `./datasets/train-v2.0.json` is downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/.
This is consistent with other SQuAD-formatted JSON files.
When attempting to load the dataset again, I get the following:
```
Using custom data configuration default
Traceback (most recent call last):
File "dataloader.py", line 6, in <module>
'json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data')
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset
save_infos=save_infos,
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 382, in download_and_prepare
with incomplete_dir(self._cache_dir) as tmp_data_dir:
File "/home/XXX/.conda/envs/torch/lib/python3.7/contextlib.py", line 112, in __enter__
return next(self.gen)
File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 368, in incomplete_dir
os.makedirs(tmp_dir)
File "/home/XXX/.conda/envs/torch/lib/python3.7/os.py", line 223, in makedirs
mkdir(name, mode)
FileExistsError: [Errno 17] File exists: '/home/XXX/.cache/huggingface/datasets/json/default/0.0.0.incomplete'
```
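One workaround for the `FileExistsError` on retry (a sketch only; it does not address the underlying segfault) is to delete the stale incomplete cache directory before re-running:
```python
import shutil

# Path taken from the traceback above; adjust to your environment.
shutil.rmtree("/home/XXX/.cache/huggingface/datasets/json/default/0.0.0.incomplete")
```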
(Not sure if you wanted this in the previous issue #369 or not as it was closed.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/373/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/373/timeline | null | completed | null | null | false | [
"I've seen this sort of thing before -- it might help to delete the directory -- I've also noticed that there is an error with the json Dataloader for any data I've tried to load. I've replaced it with this, which skips over the data feature population step:\r\n\r\n\r\n```python\r\nimport os\r\n\r\nimport pyarrow.j... |
https://api.github.com/repos/huggingface/datasets/issues/3161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3161/comments | https://api.github.com/repos/huggingface/datasets/issues/3161/events | https://github.com/huggingface/datasets/pull/3161 | 1,035,444,292 | PR_kwDODunzps4tpCsm | 3,161 | Add riddle_sense dataset | [] | closed | false | null | 2 | 2021-10-25T18:30:56Z | 2021-11-04T14:01:15Z | 2021-11-04T14:01:15Z | null | Adding a new dataset for QA with riddles. I'm confused about the tagging process because it looks like the streamlit app loads data from the current repo, so is it something that should be done after merging or off my fork? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3161/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3161.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3161",
"merged_at": "2021-11-04T14:01:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3161.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3161"
} | true | [
"@lhoestq \r\nI address all the comments, I think. Thanks! \r\n",
"The five test fails are unrelated to this PR and fixed on master so we can ignore them"
] |
https://api.github.com/repos/huggingface/datasets/issues/3096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3096/comments | https://api.github.com/repos/huggingface/datasets/issues/3096/events | https://github.com/huggingface/datasets/pull/3096 | 1,027,535,685 | PR_kwDODunzps4tQblQ | 3,096 | Fix Audio feature mp3 resampling | [] | closed | false | null | 0 | 2021-10-15T15:05:19Z | 2021-10-15T15:38:30Z | 2021-10-15T15:38:30Z | null | Issue #3095 is related to mp3 resampling, not to `cast_column`.
This PR fixes Audio feature mp3 resampling.
Fix #3095. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3096/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3096/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3096.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3096",
"merged_at": "2021-10-15T15:38:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3096.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3096"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2089 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2089/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2089/comments | https://api.github.com/repos/huggingface/datasets/issues/2089/events | https://github.com/huggingface/datasets/issues/2089 | 836,788,019 | MDU6SXNzdWU4MzY3ODgwMTk= | 2,089 | Add documentaton for dataset README.md files | [] | closed | false | null | 8 | 2021-03-20T11:44:38Z | 2023-07-25T16:45:38Z | 2023-07-25T16:45:37Z | null | Hi,
the dataset README files have special headers.
Somehow, documentation of the allowed values and tags is missing.
Could you add that?
Just to give some concrete questions that should be answered imo:
- which values can be passed to multilinguality?
- what should be passed to language_creators?
- which values should licenses have? What do I say when it is a custom license? Should I add a link?
- how should I choose size_categories? What are valid ranges?
- what are valid task_categories?
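For concreteness, the questions above refer to a YAML header of the following shape; the values shown are illustrative examples (borrowed from an existing dataset card), not the authoritative list being asked for:
```yaml
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- th
licenses:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
task_categories:
- text-classification
```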
Thanks
Philip | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2089/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2089/timeline | null | completed | null | null | false | [
"Hi ! We are using the [datasets-tagging app](https://github.com/huggingface/datasets-tagging) to select the tags to add.\r\n\r\nWe are also adding the full list of tags in #2107 \r\nThis covers multilinguality, language_creators, licenses, size_categories and task_categories.\r\n\r\nIn general if you want to add a... |
https://api.github.com/repos/huggingface/datasets/issues/6015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6015/comments | https://api.github.com/repos/huggingface/datasets/issues/6015/events | https://github.com/huggingface/datasets/pull/6015 | 1,798,807,893 | PR_kwDODunzps5VMhgB | 6,015 | Add metadata ui screenshot in docs | [] | closed | false | null | 3 | 2023-07-11T12:16:29Z | 2023-07-11T16:07:28Z | 2023-07-11T15:56:46Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6015/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6015/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6015.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6015",
"merged_at": "2023-07-11T15:56:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6015.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6015"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/90 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/90/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/90/comments | https://api.github.com/repos/huggingface/datasets/issues/90/events | https://github.com/huggingface/datasets/pull/90 | 617,311,877 | MDExOlB1bGxSZXF1ZXN0NDE3MjUxODE0 | 90 | Add download gg drive | [] | closed | false | null | 2 | 2020-05-13T09:56:02Z | 2020-05-13T12:46:28Z | 2020-05-13T10:05:31Z | null | We can now add datasets that download from Google Drive | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/90/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/90/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/90.diff",
"html_url": "https://github.com/huggingface/datasets/pull/90",
"merged_at": "2020-05-13T10:05:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/90.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/90"
} | true | [
"awesome - so no manual downloaded needed here? ",
"Yes exactly. It works like a standard download"
] |
https://api.github.com/repos/huggingface/datasets/issues/5640 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5640/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5640/comments | https://api.github.com/repos/huggingface/datasets/issues/5640/events | https://github.com/huggingface/datasets/pull/5640 | 1,625,896,057 | PR_kwDODunzps5MID3I | 5,640 | Less zip false positives | [] | closed | false | null | 6 | 2023-03-15T16:48:59Z | 2023-03-16T13:47:37Z | 2023-03-16T13:40:12Z | null | `zipfile.is_zipfile` returns false positives for some Parquet files. It causes errors when loading certain Parquet datasets, where some files are considered ZIP files by `zipfile.is_zipfile`.
This is a known issue: https://github.com/python/cpython/issues/72680
At first I wanted to rely only on magic numbers, but then I found that someone contributed a [fix to is_zipfile](https://github.com/python/cpython/pull/5053) - do you think we should use it @albertvillanova or not?
IMO it's ok to rely on magic numbers only for now, since in streaming mode we've had no issue checking only the magic number so far.
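For reference, a minimal sketch of magic-number-based ZIP detection (an illustration of the idea only, not the exact code in this PR):
```python
# ZIP files start with one of these four-byte signatures (local file header,
# empty archive, or spanned archive). Checking only the first bytes avoids
# zipfile.is_zipfile's scan for an end-of-central-directory record, which can
# match unrelated formats such as Parquet.
ZIP_MAGIC_NUMBERS = (b"PK\x03\x04", b"PK\x05\x06", b"PK\x07\x08")

def is_zip_by_magic_number(path: str) -> bool:
    with open(path, "rb") as f:
        return f.read(4) in ZIP_MAGIC_NUMBERS
```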
Close https://github.com/huggingface/datasets/issues/5639 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5640/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5640/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5640.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5640",
"merged_at": "2023-03-16T13:40:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5640.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5640"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/1362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1362/comments | https://api.github.com/repos/huggingface/datasets/issues/1362/events | https://github.com/huggingface/datasets/pull/1362 | 760,138,233 | MDExOlB1bGxSZXF1ZXN0NTM1MDIwMDAz | 1,362 | adding opus_infopankki | [] | closed | false | null | 1 | 2020-12-09T08:57:10Z | 2020-12-09T18:16:20Z | 2020-12-09T18:13:48Z | null | Adding opus_infopankki
http://opus.nlpl.eu/infopankki-v1.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1362/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1362/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1362.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1362",
"merged_at": "2020-12-09T18:13:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1362.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1362"
} | true | [
"Thanks Quentin !"
] |
https://api.github.com/repos/huggingface/datasets/issues/4859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4859/comments | https://api.github.com/repos/huggingface/datasets/issues/4859/events | https://github.com/huggingface/datasets/issues/4859 | 1,342,231,016 | I_kwDODunzps5QANHo | 4,859 | can't install using conda on Windows 10 | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 0 | 2022-08-17T19:57:37Z | 2022-08-17T19:57:37Z | null | null | ## Describe the bug
I wanted to install using conda or Anaconda Navigator. That didn't work, so I had to install using pip.
## Steps to reproduce the bug
conda install -c huggingface -c conda-forge datasets
## Expected results
Should have indicated successful installation.
## Actual results
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
... took forever, so I cancelled it with ctrl-c
## Environment info
- `datasets` version: 2.4.0 # after installing with pip
- Platform: Windows-10-10.0.19044-SP0
- Python version: 3.9.12
- PyArrow version: 9.0.0
- Pandas version: 1.4.2
- conda version: 4.13.0
conda info
active environment : base
active env location : G:\anaconda2022
shell level : 1
user config file : C:\Users\michael\.condarc
populated config files : C:\Users\michael\.condarc
conda version : 4.13.0
conda-build version : 3.21.8
python version : 3.9.12.final.0
virtual packages : __cuda=11.1=0
__win=0=0
__archspec=1=x86_64
base environment : G:\anaconda2022 (writable)
conda av data dir : G:\anaconda2022\etc\conda
conda av metadata url : None
channel URLs : https://conda.anaconda.org/pytorch/win-64
https://conda.anaconda.org/pytorch/noarch
https://conda.anaconda.org/huggingface/win-64
https://conda.anaconda.org/huggingface/noarch
https://conda.anaconda.org/conda-forge/win-64
https://conda.anaconda.org/conda-forge/noarch
https://conda.anaconda.org/anaconda-fusion/win-64
https://conda.anaconda.org/anaconda-fusion/noarch
https://repo.anaconda.com/pkgs/main/win-64
https://repo.anaconda.com/pkgs/main/noarch
https://repo.anaconda.com/pkgs/r/win-64
https://repo.anaconda.com/pkgs/r/noarch
https://repo.anaconda.com/pkgs/msys2/win-64
https://repo.anaconda.com/pkgs/msys2/noarch
package cache : G:\anaconda2022\pkgs
C:\Users\michael\.conda\pkgs
C:\Users\michael\AppData\Local\conda\conda\pkgs
envs directories : G:\anaconda2022\envs
C:\Users\michael\.conda\envs
C:\Users\michael\AppData\Local\conda\conda\envs
platform : win-64
user-agent : conda/4.13.0 requests/2.27.1 CPython/3.9.12 Windows/10 Windows/10.0.19044
administrator : False
netrc file : None
offline mode : False
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4859/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4859/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/244/comments | https://api.github.com/repos/huggingface/datasets/issues/244/events | https://github.com/huggingface/datasets/pull/244 | 631,869,155 | MDExOlB1bGxSZXF1ZXN0NDI4NjgxMTcx | 244 | Add Allociné Dataset | [] | closed | false | null | 3 | 2020-06-05T19:19:26Z | 2020-06-11T07:47:26Z | 2020-06-11T07:47:26Z | null | This is a French binary sentiment classification dataset, which was used to train this model: https://huggingface.co/tblard/tf-allocine.
Basically, it's a French "IMDB" dataset, with more reviews.
More info on [this repo](https://github.com/TheophileBlard/french-sentiment-analysis-with-bert). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/244/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/244/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/244.diff",
"html_url": "https://github.com/huggingface/datasets/pull/244",
"merged_at": "2020-06-11T07:47:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/244.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/244"
} | true | [
"great work @TheophileBlard ",
"LGTM, thanks a lot for adding dummy data tests :-) Was it difficult to create the correct dummy data folder? ",
"It was pretty easy actually. Documentation is on point !"
] |
https://api.github.com/repos/huggingface/datasets/issues/939 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/939/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/939/comments | https://api.github.com/repos/huggingface/datasets/issues/939/events | https://github.com/huggingface/datasets/pull/939 | 753,965,405 | MDExOlB1bGxSZXF1ZXN0NTI5OTQwOTYz | 939 | add wisesight_sentiment | [] | closed | false | null | 4 | 2020-12-01T03:06:39Z | 2020-12-02T04:52:38Z | 2020-12-02T04:35:51Z | null | Add `wisesight_sentiment` Social media messages in Thai language with sentiment label (positive, neutral, negative, question)
Model Card:
---
YAML tags:
annotations_creators:
- expert-generated
language_creators:
- found
languages:
- th
licenses:
- cc0-1.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- sentiment-classification
---
# Dataset Card for wisesight_sentiment
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-instances)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** https://github.com/PyThaiNLP/wisesight-sentiment
- **Repository:** https://github.com/PyThaiNLP/wisesight-sentiment
- **Paper:**
- **Leaderboard:** https://www.kaggle.com/c/wisesight-sentiment/
- **Point of Contact:** https://github.com/PyThaiNLP/
### Dataset Summary
Wisesight Sentiment Corpus: Social media messages in Thai language with sentiment label (positive, neutral, negative, question)
- Released to public domain under Creative Commons Zero v1.0 Universal license.
- Labels: {"pos": 0, "neu": 1, "neg": 2, "q": 3}
- Size: 26,737 messages
- Language: Central Thai
- Style: Informal and conversational. With some news headlines and advertisement.
- Time period: Around 2016 to early 2019. With small amount from other period.
- Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, car, hotels), with some current affairs.
- Privacy:
- Only messages that were made available to the public on the internet (websites, blogs, social network sites).
- For Facebook, this means the public comments (visible to everyone) that were made on a public page.
- Private/protected messages and messages in groups, chat, and inbox are not included.
- Alternations and modifications:
- Keep in mind that this corpus does not statistically represent anything in the language register.
- A large number of messages are not in their original form. Personal data are removed or masked.
- Duplicated, leading, and trailing whitespaces are removed. Other punctuation, symbols, and emojis are kept intact.
- (Mis)spellings are kept intact.
- Messages longer than 2,000 characters are removed.
- Long non-Thai messages are removed. Duplicated messages (exact match) are removed.
- More characteristics of the data can be explored in [this notebook](https://github.com/PyThaiNLP/wisesight-sentiment/blob/master/exploration.ipynb)
### Supported Tasks and Leaderboards
Sentiment analysis / [Kaggle Leaderboard](https://www.kaggle.com/c/wisesight-sentiment/)
### Languages
Thai
## Dataset Structure
### Data Instances
```
{'category': 'pos', 'texts': 'น่าสนนน'}
{'category': 'neu', 'texts': 'ครับ #phithanbkk'}
{'category': 'neg', 'texts': 'ซื้อแต่ผ้าอนามัยแบบเย็นมาค่ะ แบบว่าอีห่ากูนอนไม่ได้'}
{'category': 'q', 'texts': 'มีแอลกอฮอลมั้ยคะ'}
```
### Data Fields
- `texts`: texts
- `category`: sentiment of texts ranging from `pos` (positive; 0), `neu` (neutral; 1), `neg` (negative; 2) and `q` (question; 3)
### Data Splits
| | train | valid | test |
|-----------|-------|-------|-------|
| # samples | 21628 | 2404 | 2671 |
| # neu | 11795 | 1291 | 1453 |
| # neg | 5491 | 637 | 683 |
| # pos | 3866 | 434 | 478 |
| # q | 476 | 42 | 57 |
| avg words | 27.21 | 27.18 | 27.12 |
| avg chars | 89.82 | 89.50 | 90.36 |
## Dataset Creation
### Curation Rationale
Originally, the dataset was conceived for the [In-class Kaggle Competition](https://www.kaggle.com/c/wisesight-sentiment/) at Chulalongkorn university by [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University). It has since become one of the benchmarks for sentiment analysis in Thai.
### Source Data
#### Initial Data Collection and Normalization
- Style: Informal and conversational. With some news headlines and advertisement.
- Time period: Around 2016 to early 2019. With small amount from other period.
- Domains: Mixed. Majority are consumer products and services (restaurants, cosmetics, drinks, car, hotels), with some current affairs.
- Privacy:
- Only messages that were made available to the public on the internet (websites, blogs, social network sites).
- For Facebook, this means the public comments (visible to everyone) that were made on a public page.
- Private/protected messages and messages in groups, chat, and inbox are not included.
- Usernames and non-public figure names are removed
- Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222)
- If you see any personal data still remaining in the set, please tell us so we can remove it.
- Alternations and modifications:
- Keep in mind that this corpus does not statistically represent anything in the language register.
- A large number of messages are not in their original form. Personal data are removed or masked.
- Duplicated, leading, and trailing whitespaces are removed. Other punctuation, symbols, and emojis are kept intact.
- (Mis)spellings are kept intact.
- Messages longer than 2,000 characters are removed.
- Long non-Thai messages are removed. Duplicated messages (exact match) are removed.
#### Who are the source language producers?
Social media users in Thailand
### Annotations
#### Annotation process
- Sentiment values are assigned by human annotators.
- A human annotator puts his/her best effort into assigning just one label, out of four, to a message.
- Agreement, enjoyment, and satisfaction are positive. Disagreement, sadness, and disappointment are negative.
- Showing interest in a topic or in a product is counted as positive. In this sense, a question about a particular product could have a positive sentiment value if it shows interest in the product.
- Saying that another product or service is better is counted as negative.
- General information or news titles tend to be counted as neutral.
#### Who are the annotators?
Outsourced annotators hired by [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/)
### Personal and Sensitive Information
- We try to exclude any known personally identifiable information from this dataset.
- Usernames and non-public figure names are removed
- Phone numbers are masked (e.g. 088-888-8888, 09-9999-9999, 0-2222-2222)
- If you see any personal data still remaining in the set, please tell us so we can remove it.
## Considerations for Using the Data
### Social Impact of Dataset
- `wisesight_sentiment` is the first and one of the few open datasets for sentiment analysis of social media data in Thai
- There is a risk that personal information escapes the anonymization process
### Discussion of Biases
- A message can be ambiguous. When possible, the judgement will be based solely on the text itself.
- In some situations, like when the context is missing, the annotator may have to rely on his/her own world knowledge and just guess.
- In some cases, the human annotator may have access to the message's context, like an image. This additional information is not included as part of this corpus.
### Other Known Limitations
- The labels are imbalanced; over half of the texts are `neu` (neutral) whereas there are very few `q` (question).
- Misspellings in social media texts make the word tokenization process for Thai difficult, thus impacting model performance
## Additional Information
### Dataset Curators
Thanks [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp) community, [Kitsuchart Pasupa](http://www.it.kmitl.ac.th/~kitsuchart/) (Faculty of Information Technology, King Mongkut's Institute of Technology Ladkrabang), and [Ekapol Chuangsuwanich](https://www.cp.eng.chula.ac.th/en/about/faculty/ekapolc/) (Faculty of Engineering, Chulalongkorn University) for advice. The original Kaggle competition, using the first version of this corpus, can be found at https://www.kaggle.com/c/wisesight-sentiment/
### Licensing Information
- If applicable, copyright of each message content belongs to the original poster.
- **Annotation data (labels) are released to public domain.**
- [Wisesight (Thailand) Co., Ltd.](https://github.com/wisesight/) helps facilitate the annotation, but does not necessarily agree with the labels made by the human annotators. This annotation is for research purposes and does not reflect the professional work that Wisesight has done for its customers.
- The human annotator does not necessarily agree or disagree with the message. Likewise, the label he/she made to the message does not necessarily reflect his/her personal view towards the message.
### Citation Information
Please cite the following if you make use of the dataset:
Arthit Suriyawongkul, Ekapol Chuangsuwanich, Pattarawat Chormai, and Charin Polpanumas. 2019. **PyThaiNLP/wisesight-sentiment: First release.** September.
BibTeX:
```
@software{bact_2019_3457447,
author = {Suriyawongkul, Arthit and
Chuangsuwanich, Ekapol and
Chormai, Pattarawat and
Polpanumas, Charin},
title = {PyThaiNLP/wisesight-sentiment: First release},
month = sep,
year = 2019,
publisher = {Zenodo},
version = {v1.0},
doi = {10.5281/zenodo.3457447},
url = {https://doi.org/10.5281/zenodo.3457447}
}
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/939/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/939/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/939.diff",
"html_url": "https://github.com/huggingface/datasets/pull/939",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/939.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/939"
} | true | [
"@lhoestq Thanks, Quentin. Removed the .ipynb_checkpoints and edited the README.md. The tests are failing because of other dataets. I'm figuring out why since the commits only have changes on `wisesight_sentiment`\r\n\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILE... |
https://api.github.com/repos/huggingface/datasets/issues/4773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4773/comments | https://api.github.com/repos/huggingface/datasets/issues/4773/events | https://github.com/huggingface/datasets/pull/4773 | 1,322,796,721 | PR_kwDODunzps48WNV3 | 4,773 | Document loading from relative path | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 5 | 2022-07-29T23:32:21Z | 2022-08-25T18:36:45Z | 2022-08-25T18:34:23Z | null | This PR describes loading a dataset from the Hub by specifying a relative path in `data_dir` or `data_files` in `load_dataset` (see #4757). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4773/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4773/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4773.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4773",
"merged_at": "2022-08-25T18:34:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4773.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4773"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the feedback!\r\n\r\nI agree that adding it to `load_hub.mdx` is probably a bit too specific, especially for beginners reading the tutorials. Since this clarification is closely related to loading from the Hub (the only di... |
https://api.github.com/repos/huggingface/datasets/issues/1398 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1398/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1398/comments | https://api.github.com/repos/huggingface/datasets/issues/1398/events | https://github.com/huggingface/datasets/pull/1398 | 760,497,024 | MDExOlB1bGxSZXF1ZXN0NTM1MzE4NTg5 | 1,398 | Add Neural Code Search Dataset | [] | closed | false | null | 3 | 2020-12-09T16:52:16Z | 2020-12-09T18:02:27Z | 2020-12-09T18:02:27Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1398/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1398/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1398.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1398",
"merged_at": "2020-12-09T18:02:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1398.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1398"
} | true | [
"@lhoestq Refactored into new branch, please review :) ",
"The `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine",
"merging since the CI is fixed on master"
] | |
https://api.github.com/repos/huggingface/datasets/issues/1170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1170/comments | https://api.github.com/repos/huggingface/datasets/issues/1170/events | https://github.com/huggingface/datasets/pull/1170 | 757,754,378 | MDExOlB1bGxSZXF1ZXN0NTMzMDczOTU0 | 1,170 | Fix path handling for Windows | [] | closed | false | null | 1 | 2020-12-05T18:31:54Z | 2020-12-07T10:47:23Z | 2020-12-07T10:47:23Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1170/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1170/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1170.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1170",
"merged_at": "2020-12-07T10:47:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1170.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1170"
} | true | [
"@lhoestq here's the fix!"
] | |
https://api.github.com/repos/huggingface/datasets/issues/1731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1731/comments | https://api.github.com/repos/huggingface/datasets/issues/1731/events | https://github.com/huggingface/datasets/issues/1731 | 784,744,674 | MDU6SXNzdWU3ODQ3NDQ2NzQ= | 1,731 | Couldn't reach swda.py | [] | closed | false | null | 2 | 2021-01-13T02:57:40Z | 2021-01-13T11:17:40Z | 2021-01-13T11:17:40Z | null | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/swda/swda.py
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1731/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1731/timeline | null | completed | null | null | false | [
"Hi @yangp725,\r\nThe SWDA has been added very recently and has not been released yet, thus it is not available in the `1.2.0` version of 🤗`datasets`.\r\nYou can still access it by installing the latest version of the library (master branch), by following instructions in [this issue](https://github.com/huggingface... |
https://api.github.com/repos/huggingface/datasets/issues/5770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5770/comments | https://api.github.com/repos/huggingface/datasets/issues/5770/events | https://github.com/huggingface/datasets/pull/5770 | 1,673,581,555 | PR_kwDODunzps5OmntV | 5,770 | Add IterableDataset.from_spark | [] | closed | false | null | 8 | 2023-04-18T17:47:53Z | 2023-05-17T14:07:32Z | 2023-05-17T14:00:38Z | null | Follow-up from https://github.com/huggingface/datasets/pull/5701
Related issue: https://github.com/huggingface/datasets/issues/5678 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5770/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5770/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5770.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5770",
"merged_at": "2023-05-17T14:00:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5770.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5770"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi again @lhoestq this is ready for review! Not sure I have permission to add people to the reviewers list...",
"Cool ! I think you can define `IterableDataset.from_spark` instead of adding `streaming=` in `Dataset.from_spark`, it ... |
https://api.github.com/repos/huggingface/datasets/issues/2366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2366/comments | https://api.github.com/repos/huggingface/datasets/issues/2366/events | https://github.com/huggingface/datasets/issues/2366 | 893,185,266 | MDU6SXNzdWU4OTMxODUyNjY= | 2,366 | Json loader fails if user-specified features don't match the json data fields order | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-05-17T10:26:08Z | 2021-06-16T10:47:49Z | 2021-06-16T10:47:49Z | null | If you do
```python
dataset = load_dataset("json", data_files=data_files, features=features)
```
Then, depending on the order of the features in the JSON data fields, it fails:
```python
[...]
~/Desktop/hf/datasets/src/datasets/packaged_modules/json/json.py in _generate_tables(self, files)
94 if self.config.schema:
95 # Cast allows str <-> int/float, while parse_option explicit_schema does NOT
---> 96 pa_table = pa_table.cast(self.config.schema)
97 yield i, pa_table
[...]
ValueError: Target schema's field names are not matching the table's field names: ['tokens', 'ner_tags'], ['ner_tags', 'tokens']
```
This is because one must first re-order the columns of the table to match `self.config.schema` before calling `cast`.
One way to fix the `cast` would be to replace it with:
```python
# reorder the arrays if necessary + cast to schema
# we can't simply use .cast here because we may need to change the order of the columns
pa_table = pa.Table.from_arrays([pa_table[name] for name in schema.names], schema=schema)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2366/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2366/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4293 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4293/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4293/comments | https://api.github.com/repos/huggingface/datasets/issues/4293/events | https://github.com/huggingface/datasets/pull/4293 | 1,228,815,477 | PR_kwDODunzps43dRt9 | 4,293 | Fix wrong map parameter name in cache docs | [] | closed | false | null | 1 | 2022-05-08T07:27:46Z | 2022-06-14T16:49:00Z | 2022-06-14T16:07:00Z | null | The `load_from_cache` parameter of `map` should be `load_from_cache_file`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4293/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4293/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4293.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4293",
"merged_at": "2022-06-14T16:07:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4293.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4293"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1365/comments | https://api.github.com/repos/huggingface/datasets/issues/1365/events | https://github.com/huggingface/datasets/pull/1365 | 760,188,457 | MDExOlB1bGxSZXF1ZXN0NTM1MDYxNTI2 | 1,365 | Add Mkqa dataset | [] | closed | false | null | 2 | 2020-12-09T10:06:33Z | 2020-12-10T15:37:56Z | 2020-12-10T15:37:56Z | null | # MKQA: Multilingual Knowledge Questions & Answers Dataset
Adding the [MKQA](https://github.com/apple/ml-mkqa) dataset as part of the sprint 🎉
There are no official data splits, so I added just a `train` split.
Differences from the original:
- the answer:type field is a ClassLabel (I thought it might be possible to train on this as a label for categorizing questions)
- the answer:entity field has a default value of empty string '' (since this key is not available for all examples in the original)
- answer:alias has a default value of []
- [x] All tests passed
- [x] Added dummy data
- [x] Added data card (as much as I could)
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1365/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1365/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1365.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1365",
"merged_at": "2020-12-10T15:37:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1365.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1365"
} | true | [
"the `RemoteDatasetTest ` error pf the CI is fixed on master so it's fine",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/3830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3830/comments | https://api.github.com/repos/huggingface/datasets/issues/3830/events | https://github.com/huggingface/datasets/issues/3830 | 1,160,181,404 | I_kwDODunzps5FJvac | 3,830 | Got error when load cnn_dailymail dataset | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | 2 | 2022-03-05T01:43:12Z | 2022-03-07T06:53:41Z | 2022-03-07T06:53:41Z | null | When using the `datasets.load_dataset` method to load the cnn_dailymail dataset, I got the errors below:
- windows os: FileNotFoundError: [WinError 3] 系统找不到指定的路径。: 'D:\\SourceCode\\DataScience\\HuggingFace\\Data\\downloads\\1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b\\cnn\\stories'
- google colab: NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'
The code used to load the dataset:
windows os:
```
from datasets import load_dataset
dataset = load_dataset("cnn_dailymail", "3.0.0", cache_dir="D:\\SourceCode\\DataScience\\HuggingFace\\Data")
```
google colab:
```
import datasets
train_data = datasets.load_dataset("cnn_dailymail", "3.0.0", split="train")
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3830/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3830/timeline | null | completed | null | null | false | [
"Was able to reproduce the issue on Colab; full logs below. \r\n\r\n```\r\n---------------------------------------------------------------------------\r\nNotADirectoryError Traceback (most recent call last)\r\n[<ipython-input-2-39967739ba7f>](https://localhost:8080/#) in <module>()\r\n 1... |
https://api.github.com/repos/huggingface/datasets/issues/4076 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4076/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4076/comments | https://api.github.com/repos/huggingface/datasets/issues/4076/events | https://github.com/huggingface/datasets/pull/4076 | 1,188,478,867 | PR_kwDODunzps41a1n2 | 4,076 | Add ROUGE Metric Card | [] | closed | false | null | 1 | 2022-03-31T18:34:34Z | 2022-04-12T20:43:45Z | 2022-04-12T20:37:38Z | null | Add ROUGE metric card.
I've left the 'Values from popular papers' section empty for the time being because I don't know the summarization literature very well and am therefore not sure which paper(s) to pull from (note that the original rouge paper does not seem to present specific values, just correlations with human judgements). Any suggestions on which paper(s) to pull from would be helpful! :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4076/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4076/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4076.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4076",
"merged_at": "2022-04-12T20:37:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4076.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4076"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/825 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/825/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/825/comments | https://api.github.com/repos/huggingface/datasets/issues/825/events | https://github.com/huggingface/datasets/pull/825 | 739,925,960 | MDExOlB1bGxSZXF1ZXN0NTE4NTAyNjgx | 825 | Add accuracy, precision, recall and F1 metrics | [] | closed | false | null | 0 | 2020-11-10T13:50:35Z | 2020-11-11T19:23:48Z | 2020-11-11T19:23:43Z | null | This PR adds several single metrics, namely:
- Accuracy
- Precision
- Recall
- F1
They all use the sklearn metrics of the same name under the hood. They offer several useful features when training a multilabel/multiclass model:
- have a macro/micro/per label/weighted/binary/per sample score
- score only the selected labels (usually what we call the positive labels) and ignore the negative ones. For example in case of a Named Entity Recognition task, positive labels are (`PERSON`, `LOCATION` or `ORGANIZATION`) and the negative one is `O`. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/825/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/825/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/825.diff",
"html_url": "https://github.com/huggingface/datasets/pull/825",
"merged_at": "2020-11-11T19:23:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/825.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/825"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3047 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3047/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3047/comments | https://api.github.com/repos/huggingface/datasets/issues/3047/events | https://github.com/huggingface/datasets/issues/3047 | 1,021,360,616 | I_kwDODunzps484Lno | 3,047 | Loading from cache a dataset for LM built from a text classification dataset sometimes errors | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-10-08T18:23:11Z | 2021-11-03T17:13:08Z | 2021-11-03T17:13:08Z | null | ## Describe the bug
Yes, I know, that description sucks. The problem arises in the course when we build a masked language modeling dataset using the IMDB dataset. To reproduce (or try to, since it's a bit fickle):
Create a dataset for masked language modeling from the IMDB dataset.
```python
from datasets import load_dataset
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
imdb_dataset = load_dataset("imdb", split="train")
def tokenize_function(examples):
return tokenizer(examples["text"])
tokenized_dataset = imdb_dataset.map(
tokenize_function, batched=True, remove_columns=["text", "label"]
)
chunk_size = 128
def group_texts(examples):
# Concatenate all texts.
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
# Compute length of concatenated texts
total_length = len(concatenated_examples[list(examples.keys())[0]])
# We drop the last chunk if it's smaller than chunk_size
total_length = (total_length // chunk_size) * chunk_size
# Split by chunks of max_len.
result = {
k: [t[i : i + chunk_size] for i in range(0, total_length, chunk_size)]
for k, t in concatenated_examples.items()
}
# Create a new labels column
result["labels"] = result["input_ids"].copy()
return result
lm_dataset = tokenized_dataset.map(group_texts, batched=True)
```
Until now, all is well. The problem comes when you re-execute that code, more specifically:
```python
tokenized_dataset = imdb_dataset.map(
tokenize_function, batched=True, remove_columns=["text", "label"]
)
lm_dataset = tokenized_dataset.map(group_texts, batched=True)
```
Try several times if the bug doesn't appear instantly, or run each line one at a time, ideally in a notebook/Colab, and at some point you should get:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-40-357a56ee3d53> in <module>
----> 1 lm_dataset = tokenized_dataset.map(group_texts, batched=True)
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1947 new_fingerprint=new_fingerprint,
1948 disable_tqdm=disable_tqdm,
-> 1949 desc=desc,
1950 )
1951 else:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
424 }
425 # apply actual function
--> 426 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
427 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
428 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2138 if os.path.exists(cache_file_name) and load_from_cache_file:
2139 logger.warning("Loading cached processed dataset at %s", cache_file_name)
-> 2140 info = self.info.copy()
2141 info.features = features
2142 return Dataset.from_file(cache_file_name, info=info, split=self.split)
~/git/datasets/src/datasets/info.py in copy(self)
278
279 def copy(self) -> "DatasetInfo":
--> 280 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
281
282
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
177 for idx, template in enumerate(self.task_templates):
178 if isinstance(template, TextClassification):
--> 179 labels = self.features[template.label_column].names
180 self.task_templates[idx] = TextClassification(
181 text_column=template.text_column, label_column=template.label_column, labels=labels
KeyError: 'label'
```
It seems that when loading the cache, the dataset tries to access some kind of text classification template (which I imagine comes from the original dataset) and looks up a key that has since been removed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3047/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3047/timeline | null | completed | null | null | false | [
"This has been fixed in 1.15, let me know if you still have this issue"
] |
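An illustrative workaround for the cache `KeyError` reported in the issue above, while the fix lands: skip the cache lookup entirely. `load_from_cache_file` is an existing `map` argument; the variable and function names follow the repro script in the issue:

```python
tokenized_dataset = imdb_dataset.map(
    tokenize_function,
    batched=True,
    remove_columns=["text", "label"],
    load_from_cache_file=False,  # recompute instead of loading the cached Arrow file
)
lm_dataset = tokenized_dataset.map(
    group_texts,
    batched=True,
    load_from_cache_file=False,
)
```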
https://api.github.com/repos/huggingface/datasets/issues/2534 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2534/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2534/comments | https://api.github.com/repos/huggingface/datasets/issues/2534/events | https://github.com/huggingface/datasets/pull/2534 | 927,201,435 | MDExOlB1bGxSZXF1ZXN0Njc1MzkzODg0 | 2,534 | Sync with transformers disabling NOTSET | [] | closed | false | null | 2 | 2021-06-22T12:54:21Z | 2021-06-24T14:42:47Z | 2021-06-24T14:42:47Z | null | Close #2528. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2534/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2534/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2534.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2534",
"merged_at": "2021-06-24T14:42:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2534.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2534"
} | true | [
"Nice thanks ! I think there are other places with\r\n```python\r\nnot_verbose = bool(logger.getEffectiveLevel() > WARNING)\r\n```\r\n\r\nCould you replace them as well ?",
"Sure @lhoestq! I was not sure if this change should only be circumscribed to `http_get`..."
] |
https://api.github.com/repos/huggingface/datasets/issues/2470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2470/comments | https://api.github.com/repos/huggingface/datasets/issues/2470/events | https://github.com/huggingface/datasets/issues/2470 | 916,724,260 | MDU6SXNzdWU5MTY3MjQyNjA= | 2,470 | Crash when `num_proc` > dataset length for `map()` on a `datasets.Dataset`. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 6 | 2021-06-09T22:40:22Z | 2021-07-01T09:34:54Z | 2021-07-01T09:11:13Z | null | ## Describe the bug
Crash when using `num_proc` > 1 (I used 16) for `map()` on a `datasets.Dataset`.
I believe I've had cases where `num_proc` > 1 works before, but now it seems either inconsistent or dependent on my data. I'm not sure whether the issue is on my end, because it's difficult for me to debug! Any tips greatly appreciated, I'm happy to provide more info if it would help us diagnose.
## Steps to reproduce the bug
```python
# this function will be applied with map()
def tokenize_function(examples):
return tokenizer(
examples["text"],
padding=PaddingStrategy.DO_NOT_PAD,
truncation=True,
)
# data_files is a Dict[str, str] mapping name -> path
datasets = load_dataset("text", data_files={...})
# this is where the error happens if num_proc = 16,
# but is fine if num_proc = 1
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
num_proc=num_workers,
)
```
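A hedged workaround sketch, assuming the crash comes from spawning more worker processes than there are examples (as the issue title suggests); the names follow the snippet above:

```python
num_workers = min(16, len(datasets["train"]))  # never exceed the dataset length
tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=num_workers,
)
```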
## Expected results
The `map()` function succeeds with `num_proc` > 1.
## Actual results


## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Linux-5.4.0-73-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyTorch version (GPU?): 1.8.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, but I think N/A for this issue
- Using distributed or parallel set-up in script?: Multi-GPU on one machine, but I think also N/A for this issue
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2470/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2470/timeline | null | completed | null | null | false | [
"Hi ! It looks like the issue comes from pyarrow. What version of pyarrow are you using ? How did you install it ?",
"Thank you for the quick reply! I have `pyarrow==4.0.0`, and I am installing with `pip`. It's not one of my explicit dependencies, so I assume it came along with something else.",
"Could you tryi... |
https://api.github.com/repos/huggingface/datasets/issues/3032 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3032/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3032/comments | https://api.github.com/repos/huggingface/datasets/issues/3032/events | https://github.com/huggingface/datasets/issues/3032 | 1,016,488,475 | I_kwDODunzps48lmIb | 3,032 | Error when loading private dataset with "data_files" arg | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-10-05T15:46:27Z | 2021-10-12T15:26:22Z | 2021-10-12T15:25:46Z | null | ## Describe the bug
Private datasets with no loading script can't be loaded using `data_files` parameter.
## Steps to reproduce the bug
```python
from datasets import load_dataset
data_files = {"train": "**/train/*/*.jsonl", "valid": "**/valid/*/*.jsonl"}
dataset = load_dataset('dalle-mini/encoded', data_files=data_files, use_auth_token=True, streaming=True)
```
Same error happens in non-streaming mode.
## Expected results
Files should be loaded (whether in streaming or not).
## Actual results
Error:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
539 try:
--> 540 local_path = cached_path(file_path, download_config=download_config)
541 except FileNotFoundError:
8 frames
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/dalle-mini/encoded/resolve/main/encoded.py
During handling of the above exception, another exception occurred:
HTTPError Traceback (most recent call last)
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/api/datasets/dalle-mini/encoded?full=true
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, return_associated_base_path, data_files, **download_kwargs)
547 except Exception:
548 raise FileNotFoundError(
--> 549 f"Couldn't find a directory or a {resource_type} named '{path}'. "
550 f"It doesn't exist locally at {expected_dir_for_combined_path_abs} or remotely on {hf_api.endpoint}/datasets"
551 )
FileNotFoundError: Couldn't find a directory or a dataset named 'dalle-mini/encoded'. It doesn't exist locally at /content/dalle-mini/encoded or remotely on https://huggingface.co/datasets
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.12.1
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 3.0.0
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3032/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3032/timeline | null | completed | null | null | false | [
"We'll do a release tomorrow or on wednesday to make the fix available :)\r\n\r\nThanks for reproting !"
] |
https://api.github.com/repos/huggingface/datasets/issues/449 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/449/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/449/comments | https://api.github.com/repos/huggingface/datasets/issues/449/events | https://github.com/huggingface/datasets/pull/449 | 666,898,923 | MDExOlB1bGxSZXF1ZXN0NDU3NjY0NjYx | 449 | add reuters21578 dataset | [] | closed | false | null | 3 | 2020-07-28T08:58:12Z | 2020-08-03T11:10:31Z | 2020-08-03T11:10:31Z | null | This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html
#353
The dataset is a list of `.sgm` files, which are a bit different from XML files; indeed, `xml.etree` couldn't be used to read them. I treat them as text files (to avoid using an external library) and read them line by line (maybe there is a better way to do this, happy to get your opinion on it)
In the Readme file, 3 ways to split the dataset are given:
- The Modified Lewis ("ModLewis") Split: train, test and unused-set
- The Modified Apte ("ModApte") Split: train, test and unused-set
- The Modified Hayes ("ModHayes") Split: train and test
Here I consider the last one, as the readme file highlights that this split makes it possible to compare results with those of the first 2 splits. (A rough parsing sketch follows.)
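A rough sketch of the line-by-line approach described above, assuming the standard Reuters-21578 `<REUTERS>…</REUTERS>` record markers (the real loading script may handle more fields):

```python
def iter_sgm_records(filepath):
    # Treat the .sgm file as plain text and accumulate lines per <REUTERS> record.
    record = []
    with open(filepath, encoding="utf-8", errors="ignore") as f:
        for line in f:
            record.append(line)
            if "</REUTERS>" in line:
                yield "".join(record)
                record = []
```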
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/449/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/449/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/449.diff",
"html_url": "https://github.com/huggingface/datasets/pull/449",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/449.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/449"
} | true | [
"> Awesome !\r\n> Good job on parsing these files :O\r\n> \r\n> Do you think it would be hard to get the two other split configurations ?\r\n\r\nIt shouldn't be that hard, I think I can consider different config names for each split ",
"> > Awesome !\r\n> > Good job on parsing these files :O\r\n> > Do you think i... |
https://api.github.com/repos/huggingface/datasets/issues/4079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4079/comments | https://api.github.com/repos/huggingface/datasets/issues/4079/events | https://github.com/huggingface/datasets/pull/4079 | 1,189,521,576 | PR_kwDODunzps41eYRC | 4,079 | Increase max retries for GitHub datasets | [] | closed | false | null | 1 | 2022-04-01T09:34:03Z | 2022-04-01T15:32:40Z | 2022-04-01T15:27:11Z | null | As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub datasets, as previously done for GitHub metrics:
- #4063
Note that this is a temporary solution, while we decide when and how to load GitHub datasets from the Hub:
- #4059
Fix #2048
Related to:
- #4051
- #3210
- #2787
- #2075
- #2036
CC: @lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4079/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4079/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4079.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4079",
"merged_at": "2022-04-01T15:27:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4079.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4079"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2624/comments | https://api.github.com/repos/huggingface/datasets/issues/2624/events | https://github.com/huggingface/datasets/issues/2624 | 941,318,247 | MDU6SXNzdWU5NDEzMTgyNDc= | 2,624 | can't set verbosity for `metric.py` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-07-10T20:23:45Z | 2021-07-12T05:54:29Z | 2021-07-12T05:54:29Z | null | ## Describe the bug
```
[2021-07-10 20:13:11,528][datasets.utils.filelock][INFO] - Lock 139705371374976 acquired on /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.lock
[2021-07-10 20:13:11,529][datasets.arrow_writer][INFO] - Done writing 32 examples in 6100 bytes /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow.
[2021-07-10 20:13:11,531][datasets.arrow_dataset][INFO] - Set __getitem__(key) output type to python objects for no columns (when key is int or slice) and don't output other (un-formatted) columns.
[2021-07-10 20:13:11,543][/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric.py][INFO] - Removing /root/.cache/huggingface/metrics/seqeval/default/default_experiment-1-0.arrow
```
As you can see, `datasets` logging comes from different places.
`filelock`, `arrow_writer` & `arrow_dataset` come from `datasets.*`, which is expected.
However, the `metric.py` logging comes from `/conda/envs/myenv/lib/python3.8/site-packages/datasets/`.
So when setting `datasets.utils.logging.set_verbosity_error()`, it still logs the last message, which is annoying during evaluation.
I had to do
```
logging.getLogger("/conda/envs/myenv/lib/python3.8/site-packages/datasets/metric").setLevel(logging.ERROR)
```
to fully mute these messages
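A more general workaround sketch that mutes every already-created logger whose name mentions `datasets`, however the module path leaked into the logger name:

```python
import logging

for name in list(logging.root.manager.loggerDict):
    if "datasets" in name:
        logging.getLogger(name).setLevel(logging.ERROR)
```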
## Expected results
it shouldn't log these messages when setting `datasets.utils.logging.set_verbosity_error()`
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: tried both 1.8.0 & 1.9.0
- Platform: Ubuntu 18.04.5 LTS
- Python version: 3.8.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2624/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2624/timeline | null | completed | null | null | false | [
"Thanks @thomas-happify for reporting and thanks @mariosasko for the fix."
] |
https://api.github.com/repos/huggingface/datasets/issues/981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/981/comments | https://api.github.com/repos/huggingface/datasets/issues/981/events | https://github.com/huggingface/datasets/pull/981 | 754,937,612 | MDExOlB1bGxSZXF1ZXN0NTMwNzQ0MTYx | 981 | add wisesight_sentiment take2 | [] | closed | false | null | 0 | 2020-12-02T04:50:59Z | 2020-12-02T10:37:13Z | 2020-12-02T10:37:13Z | null | Take 2, since last time the rebase issues were taking me too much time to fix as opposed to just opening a new one. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/981/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/981/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/981.diff",
"html_url": "https://github.com/huggingface/datasets/pull/981",
"merged_at": "2020-12-02T10:37:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/981.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/981"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2219/comments | https://api.github.com/repos/huggingface/datasets/issues/2219/events | https://github.com/huggingface/datasets/pull/2219 | 857,321,242 | MDExOlB1bGxSZXF1ZXN0NjE0NzYxMzA3 | 2,219 | Added CUAD dataset | [] | closed | false | null | 3 | 2021-04-13T21:05:03Z | 2021-04-24T14:25:51Z | 2021-04-16T08:50:44Z | null | Dataset link : https://github.com/TheAtticusProject/cuad/
Working on README.md currently.
Closes #2084 and [#1](https://github.com/TheAtticusProject/cuad/issues/1). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2219/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2219/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2219.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2219",
"merged_at": "2021-04-16T08:50:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2219.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2219"
} | true | [
"1) Changed the language in a few places apart from those you mentioned in README\r\n2) Reduced the size of dummy data folder by removing all other entries except the first\r\n3) Updated YAML tags by using to the past version of `datasets-tagging` app. Will update the quick fix on that repository too in a while",
... |
https://api.github.com/repos/huggingface/datasets/issues/2135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2135/comments | https://api.github.com/repos/huggingface/datasets/issues/2135/events | https://github.com/huggingface/datasets/issues/2135 | 843,246,344 | MDU6SXNzdWU4NDMyNDYzNDQ= | 2,135 | en language data from MLQA dataset is missing | [] | closed | false | null | 3 | 2021-03-29T10:47:50Z | 2021-03-30T10:20:23Z | 2021-03-30T10:20:23Z | null | Hi
I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look, please? @lhoestq, thank you for your help in fixing this issue. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2135/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2135/timeline | null | completed | null | null | false | [
"Hi ! Indeed only the languages of the `translate-train` data are included...\r\nI can't find a link to download the english train set on https://github.com/facebookresearch/MLQA though, do you know where we can download it ?",
"Hi @lhoestq \r\nthank you very much for coming back to me, now I see, you are right, ... |
https://api.github.com/repos/huggingface/datasets/issues/1596 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1596/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1596/comments | https://api.github.com/repos/huggingface/datasets/issues/1596/events | https://github.com/huggingface/datasets/pull/1596 | 770,260,531 | MDExOlB1bGxSZXF1ZXN0NTQyMDM3NTU0 | 1,596 | made suggested changes to hate-speech-and-offensive-language | [] | closed | false | null | 0 | 2020-12-17T18:09:26Z | 2020-12-17T18:36:02Z | 2020-12-17T18:35:53Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1596/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1596/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1596.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1596",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1596.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1596"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/3856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3856/comments | https://api.github.com/repos/huggingface/datasets/issues/3856/events | https://github.com/huggingface/datasets/pull/3856 | 1,162,522,034 | PR_kwDODunzps40GUSf | 3,856 | Fix push_to_hub with null images | [] | closed | false | null | 1 | 2022-03-08T11:07:09Z | 2022-03-08T15:22:17Z | 2022-03-08T15:22:16Z | null | This code currently raises an error because of the null image:
```python
import datasets
dataset_dict = { 'name': ['image001.jpg', 'image002.jpg'], 'image': ['cat.jpg', None] }
features = datasets.Features({
'name': datasets.Value('string'),
'image': datasets.Image(),
})
dataset = datasets.Dataset.from_dict(dataset_dict, features)
dataset.push_to_hub("username/dataset") # this line produces an error: 'NoneType' object is not subscriptable
```
I fixed this in this PR
TODO:
- [x] add a test | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3856/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3856/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3856.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3856",
"merged_at": "2022-03-08T15:22:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3856.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3856"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3856). All of your documentation changes will be reflected on that endpoint."
] |
https://api.github.com/repos/huggingface/datasets/issues/3392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3392/comments | https://api.github.com/repos/huggingface/datasets/issues/3392/events | https://github.com/huggingface/datasets/issues/3392 | 1,073,073,408 | I_kwDODunzps4_9c0A | 3,392 | Dataset viewer issue for `dansbecker/hackernews_hiring_posts` | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2021-12-07T08:41:01Z | 2021-12-07T14:04:28Z | 2021-12-07T14:04:28Z | null | ## Dataset viewer issue for `dansbecker/hackernews_hiring_posts`
**Link:** https://huggingface.co/datasets/dansbecker/hackernews_hiring_posts
*short description of the issue*
Dataset preview not showing for uploaded DatasetDict. See https://discuss.huggingface.co/t/dataset-preview-not-showing-for-uploaded-datasetdict/12603
Am I the one who added this dataset ?
No -> @dansbecker | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3392/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3392/timeline | null | completed | null | null | false | [
"This issue was fixed by me calling `all_datasets.push_to_hub(\"hackernews_hiring_posts\")`.\r\n\r\nThe previous problems were from calling `all_datasets.save_to_disk` and then pushing with `my_repo.git_add` and `my_repo.push_to_hub`.\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/439 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/439/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/439/comments | https://api.github.com/repos/huggingface/datasets/issues/439/events | https://github.com/huggingface/datasets/issues/439 | 665,964,673 | MDU6SXNzdWU2NjU5NjQ2NzM= | 439 | Issues: Adding a FAISS or Elastic Search index to a Dataset | [] | closed | false | null | 5 | 2020-07-27T04:25:17Z | 2020-10-28T01:46:24Z | 2020-10-28T01:46:24Z | null | It seems the DPRContextEncoder and DPRContextEncoderTokenizer cited [in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) are not implemented? They did not work with the standard nlp installation. Also, I couldn't find or use them with the latest nlp install from GitHub in Colab. Is there any dependency on the latest PyArrow 1.0.0? Is it yet to be made generally available? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/439/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/439/timeline | null | completed | null | null | false | [
"`DPRContextEncoder` and `DPRContextEncoderTokenizer` will be available in the next release of `transformers`.\r\n\r\nRight now you can experiment with it by installing `transformers` from the master branch.\r\nYou can also check the docs of DPR [here](https://huggingface.co/transformers/master/model_doc/dpr.html).... |
https://api.github.com/repos/huggingface/datasets/issues/5167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5167/comments | https://api.github.com/repos/huggingface/datasets/issues/5167/events | https://github.com/huggingface/datasets/pull/5167 | 1,424,124,477 | PR_kwDODunzps5BljPw | 5,167 | Add ffmpeg4 installation instructions in warnings | [] | closed | false | null | 3 | 2022-10-26T14:21:14Z | 2022-10-27T09:01:12Z | 2022-10-27T08:58:58Z | null | Adds instructions on how to install `ffmpeg=4` on Linux (relevant for Colab users).
It looks pretty ugly because I didn't find a way to check the `ffmpeg` version from Python (without `subprocess.call()`; `ctypes.util.find_library` doesn't work), so the warning is raised on each decoding. Any suggestions on how to make it look nicer are welcome!
This is how it looks on Colab:

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5167/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5167/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5167.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5167",
"merged_at": "2022-10-27T08:58:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5167.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5167"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"To make it warn only once, feel free to use a global counter in python - and if the warning has already been done, you don't do it again",
"> Added the same formatting for the error message :)\r\n\r\nnice!! thank you! \r\n\r\n> Oh ... |
https://api.github.com/repos/huggingface/datasets/issues/1416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1416/comments | https://api.github.com/repos/huggingface/datasets/issues/1416/events | https://github.com/huggingface/datasets/pull/1416 | 760,653,971 | MDExOlB1bGxSZXF1ZXN0NTM1NDUwMTIz | 1,416 | Add Shrinked Turkish NER from Kaggle. | [] | closed | false | null | 0 | 2020-12-09T20:38:35Z | 2020-12-11T11:23:31Z | 2020-12-11T11:23:31Z | null | Add Shrinked Turkish NER from [Kaggle](https://www.kaggle.com/behcetsenturk/shrinked-twnertc-turkish-ner-data-by-kuzgunlar). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1416/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1416/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1416.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1416",
"merged_at": "2020-12-11T11:23:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1416.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1416"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6000/comments | https://api.github.com/repos/huggingface/datasets/issues/6000/events | https://github.com/huggingface/datasets/pull/6000 | 1,782,456,878 | PR_kwDODunzps5UU_FB | 6,000 | Pin `joblib` to avoid `joblibspark` test failures | [] | closed | false | null | 4 | 2023-06-30T12:36:54Z | 2023-06-30T13:17:05Z | 2023-06-30T13:08:27Z | null | `joblibspark` doesn't support the latest `joblib` release.
See https://github.com/huggingface/datasets/actions/runs/5401870932/jobs/9812337078 for the errors | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6000/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6000/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6000.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6000",
"merged_at": "2023-06-30T13:08:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6000.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6000"
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/1471 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1471/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1471/comments | https://api.github.com/repos/huggingface/datasets/issues/1471/events | https://github.com/huggingface/datasets/pull/1471 | 761,842,512 | MDExOlB1bGxSZXF1ZXN0NTM2NDUyMzcy | 1,471 | Adding the HAREM dataset | [] | closed | false | null | 5 | 2020-12-11T03:21:10Z | 2020-12-22T10:37:33Z | 2020-12-22T10:37:33Z | null | Adding the HAREM dataset, a Portuguese language dataset for NER tasks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1471/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1471/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1471.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1471",
"merged_at": "2020-12-22T10:37:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1471.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1471"
} | true | [
"Thanks for the changes !\r\n\r\nSorry if I wasn't clear about the suggestion of adding the `raw` dataset as well.\r\nBy `raw` I meant the dataset with its original features, i.e. not tokenized to follow the conll format for NER.\r\nThe `raw` dataset has data fields `doc_text`, `doc_id` and `entities`.",
"Alright... |
https://api.github.com/repos/huggingface/datasets/issues/3696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3696/comments | https://api.github.com/repos/huggingface/datasets/issues/3696/events | https://github.com/huggingface/datasets/pull/3696 | 1,129,764,534 | PR_kwDODunzps4yXXgH | 3,696 | Force unique keys in newsqa dataset | [] | closed | false | null | 0 | 2022-02-10T10:09:19Z | 2022-02-14T08:37:20Z | 2022-02-14T08:37:19Z | null | Currently, it may raise `DuplicatedKeysError`.
Fix #3630. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3696/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3696/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3696.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3696",
"merged_at": "2022-02-14T08:37:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3696.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3696"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1304 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1304/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1304/comments | https://api.github.com/repos/huggingface/datasets/issues/1304/events | https://github.com/huggingface/datasets/pull/1304 | 759,440,841 | MDExOlB1bGxSZXF1ZXN0NTM0NDQ2Nzcy | 1,304 | adding eitb_parcc | [] | closed | false | null | 0 | 2020-12-08T13:20:54Z | 2020-12-09T18:02:54Z | 2020-12-09T18:02:03Z | null | Adding EiTB-ParCC: Parallel Corpus of Comparable News
http://opus.nlpl.eu/EiTB-ParCC.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1304/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1304/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1304.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1304",
"merged_at": "2020-12-09T18:02:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1304.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1304"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2911/comments | https://api.github.com/repos/huggingface/datasets/issues/2911/events | https://github.com/huggingface/datasets/pull/2911 | 996,202,598 | PR_kwDODunzps4rvW7Y | 2,911 | Fix exception chaining | [] | closed | false | null | 0 | 2021-09-14T16:19:29Z | 2021-09-16T15:04:44Z | 2021-09-16T15:04:44Z | null | Fix exception chaining to avoid tracebacks with message: `During handling of the above exception, another exception occurred:` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2911/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2911/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2911.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2911",
"merged_at": "2021-09-16T15:04:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2911.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2911"
} | true | [] |
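A sketch of the pattern such an exception-chaining fix typically applies: chain explicitly with `from`, so the traceback reads "The above exception was the direct cause of the following exception" instead of "During handling of the above exception, another exception occurred:" (the dependency name below is hypothetical):

```python
try:
    import faiss  # hypothetical optional dependency
except ImportError as err:
    raise ImportError("Please install faiss to use FAISS indexes.") from err
```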
https://api.github.com/repos/huggingface/datasets/issues/224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/224/comments | https://api.github.com/repos/huggingface/datasets/issues/224/events | https://github.com/huggingface/datasets/issues/224 | 627,791,693 | MDU6SXNzdWU2Mjc3OTE2OTM= | 224 | [Feature Request/Help] BLEURT model -> PyTorch | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 5 | 2020-05-30T18:30:40Z | 2023-01-19T15:46:58Z | 2021-01-04T09:53:32Z | null | Hi, I am interested in porting google research's new BLEURT learned metric to PyTorch (because I wish to do something experimental with language generation and backpropping through BLEURT). I noticed that you guys don't have it yet so I am partly just asking if you plan to add it (@thomwolf said you want to do so on Twitter).
I had a go at manually using the checkpoint that they publish, which includes the weights. It seems like the architecture is exactly aligned with the out-of-the-box BertModel in transformers, just with a single linear layer on top of the CLS embedding. I loaded all the weights into the PyTorch model, but I am not able to get the same numbers as the BLEURT package's Python API. Here is my colab notebook where I tried: https://colab.research.google.com/drive/1Bfced531EvQP_CpFvxwxNl25Pj6ptylY?usp=sharing . If you have any pointers on what might be going wrong, that would be much appreciated!
Thank you muchly! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/224/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/224/timeline | null | completed | null | null | false | [
"Is there any update on this? \r\n\r\nThanks!",
"Hitting this error when using bleurt with PyTorch ...\r\n\r\n```\r\nUnrecognizedFlagError: Unknown command line flag 'f'\r\n```\r\n... and I'm assuming because it was built for TF specifically. Is there a way to use this metric in PyTorch?",
"We currently provid... |
https://api.github.com/repos/huggingface/datasets/issues/1666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1666/comments | https://api.github.com/repos/huggingface/datasets/issues/1666/events | https://github.com/huggingface/datasets/pull/1666 | 776,432,006 | MDExOlB1bGxSZXF1ZXN0NTQ2OTI2MzQw | 1,666 | Add language to dataset card for Makhzan dataset. | [] | closed | false | null | 0 | 2020-12-30T12:25:52Z | 2020-12-30T17:20:35Z | 2020-12-30T17:20:35Z | null | Add language to dataset card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1666/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1666/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1666.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1666",
"merged_at": "2020-12-30T17:20:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1666.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1666"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1512/comments | https://api.github.com/repos/huggingface/datasets/issues/1512/events | https://github.com/huggingface/datasets/pull/1512 | 764,010,722 | MDExOlB1bGxSZXF1ZXN0NTM4Mjc5MzIy | 1,512 | Add Hippocorpus Dataset | [] | closed | false | null | 0 | 2020-12-12T16:17:53Z | 2020-12-13T05:09:08Z | 2020-12-13T05:08:58Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1512/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1512/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1512.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1512",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1512.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1512"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/5758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5758/comments | https://api.github.com/repos/huggingface/datasets/issues/5758/events | https://github.com/huggingface/datasets/pull/5758 | 1,669,920,923 | PR_kwDODunzps5OaY9S | 5,758 | Fixes #5757 | [] | closed | false | null | 5 | 2023-04-16T11:56:01Z | 2023-04-20T15:37:49Z | 2023-04-20T15:30:48Z | null | Fixes the bug #5757 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5758/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5758/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5758.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5758",
"merged_at": "2023-04-20T15:30:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5758.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5758"
} | true | [
"The CI can be fixed by merging `main` into your branch. Can you do that before we merge ?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Done.\n\nOn Thu, Apr 20, 2023 at 6:01 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> The CI can be fixed by merging main into your branch. Ca... |
https://api.github.com/repos/huggingface/datasets/issues/2151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2151/comments | https://api.github.com/repos/huggingface/datasets/issues/2151/events | https://github.com/huggingface/datasets/pull/2151 | 844,886,081 | MDExOlB1bGxSZXF1ZXN0NjA0MDg5MDMw | 2,151 | Add support for axis in concatenate datasets | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"closed_at": "2021-04-20T16:50:46Z",
"closed_issues": 4,
"created_at": "2021-04-09T13:07:51Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-04-16T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/1",
"id": 6644198,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/1/labels",
"node_id": "MDk6TWlsZXN0b25lNjY0NDE5OA==",
"number": 1,
"open_issues": 0,
"state": "closed",
"title": "1.6",
"updated_at": "2021-04-20T16:50:46Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/1"
} | 5 | 2021-03-30T16:58:44Z | 2021-06-23T17:41:02Z | 2021-04-19T16:07:18Z | null | Add support for `axis` (0 or 1) in `concatenate_datasets`.
Close #853. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2151/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2151/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2151.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2151",
"merged_at": "2021-04-19T16:07:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2151.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2151"
} | true | [
"@lhoestq I am going to implement the consolidation step you mentioned in #1870.",
"@lhoestq I was thinking that the order of the TableBlocks is not relevant, isn't it?\r\n\r\nI mean, in order to consolidate _consecutive_ in-memory table blocks, in this case:\r\n```\r\nblocks = [in_memory_1, memory_mapped, in_mem... |
https://api.github.com/repos/huggingface/datasets/issues/828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/828/comments | https://api.github.com/repos/huggingface/datasets/issues/828/events | https://github.com/huggingface/datasets/pull/828 | 740,008,683 | MDExOlB1bGxSZXF1ZXN0NTE4NTcwMjY3 | 828 | Add writer_batch_size attribute to GeneratorBasedBuilder | [] | closed | false | null | 0 | 2020-11-10T15:28:19Z | 2020-11-10T16:27:36Z | 2020-11-10T16:27:36Z | null | As specified in #741, one would need to specify a custom ArrowWriter batch size to avoid filling the RAM. Indeed, the default buffer size is 10,000 examples, but for multimodal datasets that contain images or videos we may want to reduce that. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/828/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/828/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/828.diff",
"html_url": "https://github.com/huggingface/datasets/pull/828",
"merged_at": "2020-11-10T16:27:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/828.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/828"
} | true | [] |
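A sketch of how a builder might set the batch size described in the PR above; the attribute name `DEFAULT_WRITER_BATCH_SIZE` follows the library's later public API and may differ from what this PR originally introduced:

```python
import datasets

class MyMultimodalDataset(datasets.GeneratorBasedBuilder):
    # Write smaller Arrow batches so large image/video examples
    # don't accumulate in RAM before being flushed to disk.
    DEFAULT_WRITER_BATCH_SIZE = 256
```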
https://api.github.com/repos/huggingface/datasets/issues/5598 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5598/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5598/comments | https://api.github.com/repos/huggingface/datasets/issues/5598/events | https://github.com/huggingface/datasets/pull/5598 | 1,605,018,478 | PR_kwDODunzps5LCMiX | 5,598 | Fix push_to_hub with no dataset_infos | [] | closed | false | null | 2 | 2023-03-01T13:54:06Z | 2023-03-02T13:47:13Z | 2023-03-02T13:40:17Z | null | As reported in https://github.com/vijaydwivedi75/lrgb/issues/10, `push_to_hub` fails if the remote repository already exists and has a README.md without `dataset_info` in the YAML tags
cc @clefourrier | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5598/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5598/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5598.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5598",
"merged_at": "2023-03-02T13:40:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5598.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5598"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/1253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1253/comments | https://api.github.com/repos/huggingface/datasets/issues/1253/events | https://github.com/huggingface/datasets/pull/1253 | 758,517,391 | MDExOlB1bGxSZXF1ZXN0NTMzNjc4MDE1 | 1,253 | add thainer | [] | closed | false | null | 0 | 2020-12-07T13:41:54Z | 2020-12-08T14:44:49Z | 2020-12-08T14:44:49Z | null | ThaiNER (v1.3) is a 6,456-sentence named entity recognition dataset created from expanding the 2,258-sentence
[unnamed dataset](http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) by
[Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/).
It is used to train NER taggers in [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp).
The NER tags are annotated by [Tirasaroj and Aroonmanakun (2012)](http://pioneer.chula.ac.th/~awirote/publications/)
for 2,258 sentences and the rest by [@wannaphong](https://github.com/wannaphong/).
The POS tags are done by [PyThaiNLP](https://github.com/PyThaiNLP/pythainlp)'s `perceptron` engine trained on `orchid_ud`.
[@wannaphong](https://github.com/wannaphong/) is now the only maintainer of this dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1253/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1253/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1253.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1253",
"merged_at": "2020-12-08T14:44:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1253.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1253"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1919/comments | https://api.github.com/repos/huggingface/datasets/issues/1919/events | https://github.com/huggingface/datasets/issues/1919 | 812,626,872 | MDU6SXNzdWU4MTI2MjY4NzI= | 1,919 | Failure to save with save_to_disk | [] | closed | false | null | 2 | 2021-02-20T14:18:10Z | 2021-03-03T17:40:27Z | 2021-03-03T17:40:27Z | null | When I try to save a dataset locally using the `save_to_disk` method I get the error:
```bash
FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow'
```
To replicate:
1. Install `datasets` from master
2. Run this code:
```python
from datasets import load_dataset
squad = load_dataset("squad") # or any other dataset
squad.save_to_disk("squad") # error here
```
The problem is that the method does not create a directory named `dataset_path` to save the dataset in (i.e. it does not create the *train* and *validation* directories in this case). After creating the directories manually, the problem resolves.
I'll open a PR soon doing that and linking this issue.
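Until the fix lands, here is a minimal workaround sketch (assuming `save_to_disk` just needs the per-split directories to exist first; the layout is inferred from the error above):
```python
import os
from datasets import load_dataset

squad = load_dataset("squad")
# Pre-create the directory for each split before saving (workaround only).
for split in squad:
    os.makedirs(os.path.join("squad", split), exist_ok=True)
squad.save_to_disk("squad")
```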
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1919/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1919/timeline | null | completed | null | null | false | [
"Hi thanks for reporting and for proposing a fix :)\r\n\r\nI just merged a fix, feel free to try it from the master branch !",
"Closing since this has been fixed by #1923"
] |
https://api.github.com/repos/huggingface/datasets/issues/2545 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2545/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2545/comments | https://api.github.com/repos/huggingface/datasets/issues/2545/events | https://github.com/huggingface/datasets/pull/2545 | 929,016,580 | MDExOlB1bGxSZXF1ZXN0Njc2OTMxOTYw | 2,545 | Fix DuplicatedKeysError in drop dataset | [] | closed | false | null | 0 | 2021-06-24T09:10:39Z | 2021-06-24T14:57:08Z | 2021-06-24T14:57:08Z | null | Close #2542.
cc: @VictorSanh. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2545/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2545/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2545.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2545",
"merged_at": "2021-06-24T14:57:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2545.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2545"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/88 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/88/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/88/comments | https://api.github.com/repos/huggingface/datasets/issues/88/events | https://github.com/huggingface/datasets/pull/88 | 617,284,664 | MDExOlB1bGxSZXF1ZXN0NDE3MjI5ODQw | 88 | Add wiki40b | [] | closed | false | null | 1 | 2020-05-13T09:16:01Z | 2020-05-13T12:31:55Z | 2020-05-13T12:31:54Z | null | This one is a beam dataset that downloads files using tensorflow.
I tested it on a small config and it works fine.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/88/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/88/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/88.diff",
"html_url": "https://github.com/huggingface/datasets/pull/88",
"merged_at": "2020-05-13T12:31:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/88.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/88"
} | true | [
"Looks good to me. I have not really looked too much into the Beam Datasets yet though - so I think you can merge whenever you think is good for Beam datasets :-) "
] |
https://api.github.com/repos/huggingface/datasets/issues/1290 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1290/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1290/comments | https://api.github.com/repos/huggingface/datasets/issues/1290/events | https://github.com/huggingface/datasets/issues/1290 | 759,339,989 | MDU6SXNzdWU3NTkzMzk5ODk= | 1,290 | imdb dataset cannot be downloaded | [] | closed | false | null | 3 | 2020-12-08T10:47:36Z | 2020-12-24T17:38:09Z | 2020-12-24T17:38:09Z | null | hi
please find the error below when getting the imdb train split:
thanks
```python
>>> datasets.load_dataset("imdb", split="train")
```
errors:
```
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
Downloading and preparing dataset imdb/plain_text (download: 80.23 MiB, generated: 127.06 MiB, post-processed: Unknown size, total: 207.28 MiB) to /idiap/temp/rkarimi/cache_home_1/datasets/imdb/plain_text/1.0.0/90099cb476936b753383ba2ae6ab2eae419b2e87f71cd5189cb9c8e5814d12a3...
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets
cahce dir /idiap/temp/rkarimi/cache_home_1/datasets/downloads
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
ignore_verifications=ignore_verifications,
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 558, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/utils/info_utils.py", line 73, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=7486451, num_examples=5628, dataset_name='imdb')}]
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1290/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1290/timeline | null | completed | null | null | false | [
"Hi @rabeehk , I am unable to reproduce your problem locally.\r\nCan you try emptying the cache (removing the content of `/idiap/temp/rkarimi/cache_home_1/datasets`) and retry ?",
"Hi,\r\nthanks, I did remove the cache and still the same error here\r\n\r\n```\r\n>>> a = datasets.load_dataset(\"imdb\", split=\"tra... |
https://api.github.com/repos/huggingface/datasets/issues/2753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2753/comments | https://api.github.com/repos/huggingface/datasets/issues/2753/events | https://github.com/huggingface/datasets/pull/2753 | 959,036,995 | MDExOlB1bGxSZXF1ZXN0NzAyMjEyMjMz | 2,753 | Generate metadata JSON for reclor dataset | [] | closed | false | null | 0 | 2021-08-03T11:52:29Z | 2021-08-04T08:07:15Z | 2021-08-04T08:07:15Z | null | Related to #2743. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2753/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2753/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2753.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2753",
"merged_at": "2021-08-04T08:07:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2753.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2753"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2781/comments | https://api.github.com/repos/huggingface/datasets/issues/2781/events | https://github.com/huggingface/datasets/issues/2781 | 964,805,351 | MDU6SXNzdWU5NjQ4MDUzNTE= | 2,781 | Latest v2.0.0 release of sacrebleu has broken some metrics | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-08-10T09:59:41Z | 2021-08-10T11:16:07Z | 2021-08-10T11:16:07Z | null | ## Describe the bug
After `sacrebleu` v2.0.0 release (see changes here: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15), some of `datasets` metrics are broken:
- Default tokenizer `sacrebleu.DEFAULT_TOKENIZER` no longer exists:
- #2739
- #2778
- Bleu tokenizers are no longer accessible with `sacrebleu.TOKENIZERS`:
- #2779
- `corpus_bleu` args have been renamed from `(sys_stream, ref_streams)` to `(hypotheses, references)`:
- #2782 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2781/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2781/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3340/comments | https://api.github.com/repos/huggingface/datasets/issues/3340/events | https://github.com/huggingface/datasets/pull/3340 | 1,067,292,636 | PR_kwDODunzps4vMP6Z | 3,340 | Fix JSON ClassLabel casting for integers | [] | closed | false | null | 0 | 2021-11-30T14:19:54Z | 2021-12-01T11:27:30Z | 2021-12-01T11:27:30Z | null | Loading a JSON dataset with ClassLabel feature types currently fails if the JSON data already has integers. Indeed currently it tries to convert the strings to integers without even checking if the data are not integers already.
For example this currently fails:
```python
from datasets import load_dataset, Features, ClassLabel
path = "data.json"
f = Features({"a": ClassLabel(names=["neg", "pos"])})
d = load_dataset("json", data_files=path, features=f)
```
data.json
```json
{"a": 0}
{"a": 1}
```
I fixed that by adding a line that checks the type of the JSON data before trying to convert it.
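For illustration, the kind of check involved looks roughly like this (a sketch, not the exact patch; the helper name is hypothetical):
```python
from datasets import ClassLabel

def encode_label(value, class_label: ClassLabel):
    # JSON integers are assumed to already be label ids; only strings
    # need to go through the name-to-id lookup.
    if isinstance(value, int):
        return value
    return class_label.str2int(value)
```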
cc @albertvillanova let me know if it sounds good to you | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3340/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3340/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3340.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3340",
"merged_at": "2021-12-01T11:27:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3340.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3340"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4894/comments | https://api.github.com/repos/huggingface/datasets/issues/4894/events | https://github.com/huggingface/datasets/pull/4894 | 1,350,667,270 | PR_kwDODunzps49yIvr | 4,894 | Add citation information to makhzan dataset | [] | closed | false | null | 1 | 2022-08-25T10:16:40Z | 2022-08-30T06:21:54Z | 2022-08-25T13:19:41Z | null | This PR adds the citation information to `makhzan` dataset, once they have replied to our request for that information:
- https://github.com/zeerakahmed/makhzan/issues/43 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4894/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4894/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4894.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4894",
"merged_at": "2022-08-25T13:19:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4894.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4894"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2364 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2364/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2364/comments | https://api.github.com/repos/huggingface/datasets/issues/2364/events | https://github.com/huggingface/datasets/pull/2364 | 892,420,500 | MDExOlB1bGxSZXF1ZXN0NjQ1MTI4MDYx | 2,364 | README updated for SNLI, MNLI | [] | closed | false | null | 2 | 2021-05-15T11:37:59Z | 2021-05-17T14:14:27Z | 2021-05-17T13:34:19Z | null | Closes #2275. Mentioned about -1 labels in MNLI, SNLI and how they should be removed before training. @lhoestq `check_code_quality` test might fail for MNLI as the license name `other-Open Portion of the American National Corpus` is not a registered tag for 'licenses' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2364/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2364/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2364.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2364",
"merged_at": "2021-05-17T13:34:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2364.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2364"
} | true | [
"Regarding the license issue, I think we should allow it since it starts with `other-`. Cc @gchhablani what do you think ?",
"@lhoestq I agree, I'll look into it."
] |
https://api.github.com/repos/huggingface/datasets/issues/3126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3126/comments | https://api.github.com/repos/huggingface/datasets/issues/3126/events | https://github.com/huggingface/datasets/issues/3126 | 1,032,093,055 | I_kwDODunzps49hH1_ | 3,126 | "arabic_billion_words" dataset does not create the full dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-10-21T06:02:38Z | 2021-10-22T13:28:40Z | 2021-10-22T13:28:40Z | null | ## Describe the bug
When running:
raw_dataset = load_dataset('arabic_billion_words','Alittihad')
the correct dataset file is pulled from the url.
But, the generated dataset includes just a small portion of the data included in the file.
This is true for all other portions of the "arabic_billion_words" dataset ('Almasryalyoum',.....)
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
raw_dataset = load_dataset('arabic_billion_words','Alittihad')
# The screen message:
Downloading and preparing dataset arabic_billion_words/Alittihad (download: 332.13 MiB, generated: 20.62 MiB, post-processed: Unknown size, total: 352.74 MiB)
```
## Expected results
over 100K sentences
## Actual results
only 11K sentences
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.14.0
- Platform: Linux-5.8.0-63-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3126/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3126/timeline | null | completed | null | null | false | [
"Thanks for reporting, @vitalyshalumov.\r\n\r\nApparently the script to parse the data has a bug, and does not generate the entire dataset.\r\n\r\nI'm fixing it."
] |
https://api.github.com/repos/huggingface/datasets/issues/5923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5923/comments | https://api.github.com/repos/huggingface/datasets/issues/5923/events | https://github.com/huggingface/datasets/issues/5923 | 1,737,436,227 | I_kwDODunzps5njyxD | 5,923 | Cannot import datasets - ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility | [] | open | false | null | 13 | 2023-06-02T04:16:32Z | 2023-07-23T20:39:59Z | null | null | ### Describe the bug
When trying to import datasets, I get a pyarrow ValueError:
```
Traceback (most recent call last):
File "/Users/edward/test/test.py", line 1, in <module>
import datasets
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/__init__.py", line 43, in <module>
from .arrow_dataset import Dataset
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 65, in <module>
from .arrow_reader import ArrowReader
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/datasets/arrow_reader.py", line 28, in <module>
import pyarrow.parquet as pq
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/__init__.py", line 20, in <module>
from .core import *
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 45, in <module>
from pyarrow.fs import (LocalFileSystem, FileSystem, FileType,
File "/Users/edward/opt/anaconda3/envs/cs235/lib/python3.9/site-packages/pyarrow/fs.py", line 49, in <module>
from pyarrow._gcsfs import GcsFileSystem # noqa
File "pyarrow/_gcsfs.pyx", line 1, in init pyarrow._gcsfs
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
```
### Steps to reproduce the bug
`import datasets`
### Expected behavior
Successful import
### Environment info
Conda environment, MacOS
python 3.9.12
datasets 2.12.0
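A quick sanity check (suggested in the comments below) is to confirm which `pyarrow` build is actually imported, since mixed conda/pip installs are a common cause of this error:
```python
import pyarrow
# Print the version and the file path of the imported build; a path outside
# the active conda env points to a conflicting installation.
print(pyarrow.__version__, pyarrow.__file__)
```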
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5923/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5923/timeline | null | null | null | null | false | [
"Based on https://github.com/rapidsai/cudf/issues/10187, this probably means your `pyarrow` installation is not compatible with `datasets`.\r\n\r\nCan you please execute the following commands in the terminal and paste the output here?\r\n```\r\nconda list | grep arrow\r\n``` \r\n```\r\npython -c \"import pyarrow; ... |
https://api.github.com/repos/huggingface/datasets/issues/5715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5715/comments | https://api.github.com/repos/huggingface/datasets/issues/5715/events | https://github.com/huggingface/datasets/issues/5715 | 1,657,479,788 | I_kwDODunzps5iyyJs | 5,715 | Return Numpy Array (fixed length) Mode, in __get_item__, Instead of List | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 1 | 2023-04-06T13:57:48Z | 2023-04-20T17:16:26Z | 2023-04-20T17:16:26Z | null | ### Feature request
There is a long-known, easily forgotten problem with multiprocessing in the PyTorch dataloader:
RAM or shared-memory usage grows excessively when `num_workers > 1` and the dataset or dataloader returns Python lists or dicts.
https://github.com/pytorch/pytorch/issues/13246
With Hugging Face datasets, unfortunately, the default return type is a list, so the problem comes up often unless we configure something to avoid it.
However, the issue goes away when the returned output has a fixed length.
Therefore, I request a mode that returns fixed-length outputs (e.g. NumPy arrays) rather than lists.
A possible design would be to load datasets as:
```python
load_dataset(..., with_return_as_fixed_tensor=True)
```
### Motivation
The general solution for this issue is already in the comments: https://github.com/pytorch/pytorch/issues/13246#issuecomment-905703662
NumPy and Pandas do not seem to have this problem, even though both support string types.
(I'm not sure whether the Sequence feature of Hugging Face datasets can solve this problem as well.)
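For reference, the existing formatting API already gets part of the way there; a minimal sketch (assuming numeric, fixed-shape columns):
```python
from datasets import load_dataset

ds = load_dataset("mnist", split="train")
# __getitem__ now returns NumPy arrays instead of Python lists.
ds.set_format("numpy", columns=["image", "label"])
sample = ds[0]
print(type(sample["image"]))  # <class 'numpy.ndarray'>
```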
### Your contribution
I'll read it! Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5715/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5715/timeline | null | completed | null | null | false | [
"Hi! \r\n\r\nYou can use [`.set_format(\"np\")`](https://huggingface.co/docs/datasets/process#format) to get NumPy arrays (or Pytorch tensors with `.set_format(\"torch\")`) in `__getitem__`.\r\n\r\nAlso, have you been able to reproduce the linked PyTorch issue with a HF dataset?\r\n "
] |
https://api.github.com/repos/huggingface/datasets/issues/5767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5767/comments | https://api.github.com/repos/huggingface/datasets/issues/5767/events | https://github.com/huggingface/datasets/issues/5767 | 1,672,433,979 | I_kwDODunzps5jr1E7 | 5,767 | How to use Distill-BERT with different datasets? | [] | closed | false | null | 1 | 2023-04-18T06:25:12Z | 2023-04-20T16:52:05Z | 2023-04-20T16:52:05Z | null | ### Describe the bug
- `transformers` version: 4.11.3
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.12.0+cu102 (True)
- Tensorflow version (GPU?): 2.10.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Steps to reproduce the bug
I recently read [this](https://huggingface.co/docs/transformers/quicktour#train-with-tensorflow:~:text=The%20most%20important%20thing%20to%20remember%20is%20you%20need%20to%20instantiate%20a%20tokenizer%20with%20the%20same%20model%20name%20to%20ensure%20you%E2%80%99re%20using%20the%20same%20tokenization%20rules%20a%20model%20was%20pretrained%20with.) and was wondering how to use distill-BERT (which is pre-trained with imdb dataset) with a different dataset (for eg. [this](https://huggingface.co/datasets/yhavinga/imdb_dutch) dataset)?
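What I have so far is something like this (a sketch; I'm assuming the Dutch dataset exposes a `text` column and a binary label):
```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

ds = load_dataset("yhavinga/imdb_dutch", split="train")
# Per the quicktour: tokenizer and model must come from the same checkpoint.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
batch = tokenizer(ds[0]["text"], truncation=True, return_tensors="pt")
outputs = model(**batch)
```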
### Expected behavior
Distill-BERT should work with different datasets.
### Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-58-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 11.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5767/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5767/timeline | null | completed | null | null | false | [
"Closing this one in favor of the same issue opened in the `transformers` repo."
] |
https://api.github.com/repos/huggingface/datasets/issues/4126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4126/comments | https://api.github.com/repos/huggingface/datasets/issues/4126/events | https://github.com/huggingface/datasets/issues/4126 | 1,196,665,194 | I_kwDODunzps5HU6lq | 4,126 | dataset viewer issue for common_voice | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
},
{
"color": "F... | closed | false | null | 2 | 2022-04-07T23:34:28Z | 2022-04-25T13:42:17Z | 2022-04-25T13:42:16Z | null | ## Dataset viewer issue for 'common_voice'
**Link:** https://huggingface.co/datasets/common_voice
Server Error
Status code: 400
Exception: TypeError
Message: __init__() got an unexpected keyword argument 'audio_column'
Am I the one who added this dataset ? No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4126/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4126/timeline | null | completed | null | null | false | [
"Yes, it's a known issue, and we expect to fix it soon.",
"Fixed.\r\n\r\n<img width=\"1393\" alt=\"Capture d’écran 2022-04-25 à 15 42 05\" src=\"https://user-images.githubusercontent.com/1676121/165101176-d729d85b-efff-45a8-bad1-b69223edba5f.png\">\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4022/comments | https://api.github.com/repos/huggingface/datasets/issues/4022/events | https://github.com/huggingface/datasets/pull/4022 | 1,180,816,682 | PR_kwDODunzps41BNeA | 4,022 | Replace dbpedia_14 data url | [] | closed | false | null | 1 | 2022-03-25T13:47:21Z | 2022-03-25T15:03:37Z | 2022-03-25T14:58:49Z | null | I replaced the Google Drive URL of the dataset by the FastAI one, since we've had some issues with Google Drive. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4022/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4022/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4022.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4022",
"merged_at": "2022-03-25T14:58:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4022.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4022"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3925/comments | https://api.github.com/repos/huggingface/datasets/issues/3925/events | https://github.com/huggingface/datasets/pull/3925 | 1,169,913,769 | PR_kwDODunzps40eaq8 | 3,925 | Fix main_classes docs index | [] | closed | false | null | 3 | 2022-03-15T16:33:46Z | 2022-03-22T13:49:11Z | 2022-03-22T13:44:04Z | null | Currently the `main_classes` documentation has a wrong index. I believe this comes from issues in the examples of the Translation feature types

| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3925/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3925/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3925.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3925",
"merged_at": "2022-03-22T13:44:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3925.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3925"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hmm it's still not good \r\n\r\n\r\nany idea what could cause this ?",
"Ok fixed :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/4824 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4824/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4824/comments | https://api.github.com/repos/huggingface/datasets/issues/4824/events | https://github.com/huggingface/datasets/pull/4824 | 1,335,826,639 | PR_kwDODunzps49BR5H | 4,824 | Fix titles in dataset cards | [] | closed | false | null | 2 | 2022-08-11T11:27:48Z | 2022-08-11T13:46:11Z | 2022-08-11T12:56:49Z | null | Fix all the titles in the dataset cards, so that they conform to the required format. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4824/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4824/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4824.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4824",
"merged_at": "2022-08-11T12:56:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4824.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4824"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] |
https://api.github.com/repos/huggingface/datasets/issues/2431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2431/comments | https://api.github.com/repos/huggingface/datasets/issues/2431/events | https://github.com/huggingface/datasets/issues/2431 | 907,413,691 | MDU6SXNzdWU5MDc0MTM2OTE= | 2,431 | DuplicatedKeysError when trying to load adversarial_qa | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-05-31T12:11:19Z | 2021-06-01T08:54:03Z | 2021-06-01T08:52:11Z | null | ## Describe the bug
Loading the `adversarial_qa` dataset raises a `DuplicatedKeysError`.
## Steps to reproduce the bug
```python
dataset = load_dataset('adversarial_qa', 'adversarialQA')
```
## Expected results
The dataset should be loaded into memory
## Actual results
>DuplicatedKeysError: FAILURE TO GENERATE DATASET !
>Found duplicate Key: 4d3cb5677211ee32895ca9c66dad04d7152254d4
>Keys should be unique and deterministic in nature
>
>
>During handling of the above exception, another exception occurred:
>
>DuplicatedKeysError Traceback (most recent call last)
>
>/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py in check_duplicate_keys(self)
> 347 for hash, key in self.hkey_record:
> 348 if hash in tmp_record:
>--> 349 raise DuplicatedKeysError(key)
> 350 else:
> 351 tmp_record.add(hash)
>
>DuplicatedKeysError: FAILURE TO GENERATE DATASET !
>Found duplicate Key: 4d3cb5677211ee32895ca9c66dad04d7152254d4
>Keys should be unique and deterministic in nature
## Environment info
- `datasets` version: 1.7.0
- Platform: Linux-5.4.109+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyArrow version: 3.0.0
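Update: per the maintainer reply below, a workaround until the patch release is to load the fixed script from the master branch:
```python
from datasets import load_dataset

# script_version="master" fetches the fixed loading script
dataset = load_dataset("adversarial_qa", "adversarialQA", script_version="master")
```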
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2431/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2431/timeline | null | completed | null | null | false | [
"Thanks for reporting !\r\n#2433 fixed the issue, thanks @mariosasko :)\r\n\r\nWe'll do a patch release soon of the library.\r\nIn the meantime, you can use the fixed version of adversarial_qa by adding `script_version=\"master\"` in `load_dataset`"
] |
https://api.github.com/repos/huggingface/datasets/issues/1086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1086/comments | https://api.github.com/repos/huggingface/datasets/issues/1086/events | https://github.com/huggingface/datasets/pull/1086 | 756,720,643 | MDExOlB1bGxSZXF1ZXN0NTMyMjIzNDEy | 1,086 | adding cdt dataset | [] | closed | false | null | 2 | 2020-12-04T01:28:11Z | 2020-12-04T15:04:02Z | 2020-12-04T15:04:02Z | null | - **Name:** *Cyberbullying Detection Task*
- **Description:** *The Cyberbullying Detection task was part of 2019 edition of PolEval competition. The goal is to predict if a given Twitter message contains a cyberbullying (harmful) content.*
- **Data:** *https://github.com/ptaszynski/cyberbullying-Polish*
- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for the Polish language understanding.* | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1086/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1086/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1086.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1086",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1086.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1086"
} | true | [
"> Thanks for adding this one !\r\n> \r\n> I left a few comments\r\n> \r\n> after the change you'll need to regenerate the dataset_infos.json file as well\r\n\r\ndataset_infos.json regenerated",
"looks like this PR includes changes to many files other that the ones for CDT\r\ncould you create another branch and a... |
https://api.github.com/repos/huggingface/datasets/issues/2 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2/comments | https://api.github.com/repos/huggingface/datasets/issues/2/events | https://github.com/huggingface/datasets/issues/2 | 599,767,671 | MDU6SXNzdWU1OTk3Njc2NzE= | 2 | Issue to read a local dataset | [] | closed | false | null | 5 | 2020-04-14T18:18:51Z | 2020-05-11T18:55:23Z | 2020-05-11T18:55:22Z | null | Hello,
As proposed by @thomwolf, I'm opening an issue to explain what I'm trying to do without success. What I want to do is to create and load a local dataset, the script I have done is the following:
```python
import os
import csv
import nlp
class BbcConfig(nlp.BuilderConfig):
def __init__(self, **kwargs):
super(BbcConfig, self).__init__(**kwargs)
class Bbc(nlp.GeneratorBasedBuilder):
_DIR = "./data"
_DEV_FILE = "test.csv"
_TRAINING_FILE = "train.csv"
BUILDER_CONFIGS = [BbcConfig(name="bbc", version=nlp.Version("1.0.0"))]
def _info(self):
return nlp.DatasetInfo(builder=self, features=nlp.features.FeaturesDict({"id": nlp.string, "text": nlp.string, "label": nlp.string}))
def _split_generators(self, dl_manager):
files = {"train": os.path.join(self._DIR, self._TRAINING_FILE), "dev": os.path.join(self._DIR, self._DEV_FILE)}
return [nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": files["train"]}),
nlp.SplitGenerator(name=nlp.Split.VALIDATION, gen_kwargs={"filepath": files["dev"]})]
def _generate_examples(self, filepath):
with open(filepath) as f:
reader = csv.reader(f, delimiter=',', quotechar="\"")
lines = list(reader)[1:]
for idx, line in enumerate(lines):
yield idx, {"idx": idx, "text": line[1], "label": line[0]}
```
The dataset is attached to this issue as well:
[data.zip](https://github.com/huggingface/datasets/files/4476928/data.zip)
Now the steps to reproduce what I would like to do:
1. unzip the data locally (I know the nlp lib can detect and extract archives, but I want to make the reproduction as simple as possible)
2. create the `bbc.py` script as above at the same location as the unzipped `data` folder.
Now I try to load the dataset in three different ways and none works. The first uses the name of the dataset, as I would do with TFDS:
```python
import nlp
from bbc import Bbc
dataset = nlp.load("bbc")
```
I get:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
builder_cls = load_dataset(path, name=name, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 88, in load_dataset
local_files_only=local_files_only,
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/utils/file_utils.py", line 214, in cached_path
if not is_zipfile(output_path) and not tarfile.is_tarfile(output_path):
File "/opt/anaconda3/envs/transformers/lib/python3.7/zipfile.py", line 203, in is_zipfile
with open(filename, "rb") as fp:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```
But @thomwolf told me there is no need to import the script, just to pass its path, so I tried three different ways:
```python
import nlp
dataset = nlp.load("bbc.py")
```
And
```python
import nlp
dataset = nlp.load("./bbc.py")
```
And
```python
import nlp
dataset = nlp.load("/absolute/path/to/bbc.py")
```
These three ways give me:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 280, in load
dbuilder: DatasetBuilder = builder(path, name, data_dir=data_dir, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 166, in builder
builder_cls = load_dataset(path, name=name, **builder_kwargs)
File "/opt/anaconda3/envs/transformers/lib/python3.7/site-packages/nlp/load.py", line 124, in load_dataset
dataset_module = importlib.import_module(module_path)
File "/opt/anaconda3/envs/transformers/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'nlp.datasets.2fd72627d92c328b3e9c4a3bf7ec932c48083caca09230cebe4c618da6e93688.bbc'
```
Any idea what I'm missing? Or I might have spotted a bug :) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2/timeline | null | completed | null | null | false | [
"My first bug report ❤️\r\nLooking into this right now!",
"Ok, there are some news, most good than bad :laughing: \r\n\r\nThe dataset script now became:\r\n```python\r\nimport csv\r\n\r\nimport nlp\r\n\r\n\r\nclass Bbc(nlp.GeneratorBasedBuilder):\r\n VERSION = nlp.Version(\"1.0.0\")\r\n\r\n def __init__(sel... |
https://api.github.com/repos/huggingface/datasets/issues/1057 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1057/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1057/comments | https://api.github.com/repos/huggingface/datasets/issues/1057/events | https://github.com/huggingface/datasets/pull/1057 | 756,331,419 | MDExOlB1bGxSZXF1ZXN0NTMxODkzMjE4 | 1,057 | Adding TamilMixSentiment | [] | closed | false | null | 1 | 2020-12-03T16:04:25Z | 2020-12-04T10:09:34Z | 2020-12-04T10:09:12Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1057/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1057/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1057.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1057",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1057.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1057"
} | true | [
"looks like this pr incldues changes about many other files than the ones for tamilMixSentiment, could you create another branch and another PR ?"
] | |
https://api.github.com/repos/huggingface/datasets/issues/4307 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4307/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4307/comments | https://api.github.com/repos/huggingface/datasets/issues/4307/events | https://github.com/huggingface/datasets/pull/4307 | 1,231,175,639 | PR_kwDODunzps43k-Wo | 4,307 | Add packaged builder configs to the documentation | [] | closed | false | null | 1 | 2022-05-10T13:34:19Z | 2022-05-10T14:03:50Z | 2022-05-10T13:55:54Z | null | Add the packaged builders configurations to the docs reference is useful to show the list of all parameters one can use when loading data in many formats: CSV, JSON, etc. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4307/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4307/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4307.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4307",
"merged_at": "2022-05-10T13:55:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4307.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4307"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4537/comments | https://api.github.com/repos/huggingface/datasets/issues/4537/events | https://github.com/huggingface/datasets/pull/4537 | 1,279,144,310 | PR_kwDODunzps46ESJn | 4,537 | Fix WMT dataset loading issue and docs update | [] | closed | false | null | 2 | 2022-06-21T21:48:02Z | 2022-06-24T07:05:43Z | 2022-06-24T07:05:10Z | null | This PR is a fix for #4354
Changes are made for `wmt14`, `wmt15`, `wmt16`, `wmt17`, `wmt18`, `wmt19` and `wmt_t2t`, and the READMEs of the corresponding datasets are updated.
As I am on an M1 Mac, I am not able to create a virtual `dev` environment using `pip install -e ".[dev]"`. The issue is that `tensorflow-text` is not supported on M1s and there is no supported build from Apple or Google, so I was unable to perform any local testing.
Let me know if any additional changes are required.
Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4537/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4537/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4537.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4537",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4537.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4537"
} | true | [
"The PR branch now has some commits unrelated to the changes, probably due to rebasing. Can you please close this PR and open a new one from a new branch? You can use `git cherry-pick` to preserve the relevant changes:\r\n```bash\r\ngit checkout master\r\ngit remote add upstream git@github.com:huggingface/datasets... |
https://api.github.com/repos/huggingface/datasets/issues/1776 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1776/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1776/comments | https://api.github.com/repos/huggingface/datasets/issues/1776/events | https://github.com/huggingface/datasets/issues/1776 | 792,755,249 | MDU6SXNzdWU3OTI3NTUyNDk= | 1,776 | [Question & Bug Report] Can we preprocess a dataset on the fly? | [] | closed | false | null | 6 | 2021-01-24T09:28:24Z | 2021-05-20T04:15:58Z | 2021-05-20T04:15:58Z | null | I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with very large corpus which generates huge cache file (several TB cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating cache?
BTW, I tried raising `writer_batch_size`. That argument seems to have no effect when it's larger than `batch_size`, because each batch is saved immediately after it's processed. Please check the following code:
https://github.com/huggingface/datasets/blob/0281f9d881f3a55c89aeaa642f1ba23444b64083/src/datasets/arrow_dataset.py#L1532 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1776/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1776/timeline | null | completed | null | null | false | [
"We are very actively working on this. How does your dataset look like in practice (number/size/type of files)?",
"It's a text file with many lines (about 1B) of Chinese sentences. I use it to train language model using https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm... |
https://api.github.com/repos/huggingface/datasets/issues/3320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3320/comments | https://api.github.com/repos/huggingface/datasets/issues/3320/events | https://github.com/huggingface/datasets/issues/3320 | 1,063,531,992 | I_kwDODunzps4_ZDXY | 3,320 | Can't get tatoeba.rus dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-11-25T12:31:11Z | 2021-11-26T10:30:29Z | 2021-11-26T10:30:29Z | null | ## Describe the bug
Trying to load the `tatoeba.rus` config of the `xtreme` dataset gives an error:
> FileNotFoundError: Couldn't find file at https://github.com/facebookresearch/LASER/raw/master/data/tatoeba/v1/tatoeba.rus-eng.rus
## Steps to reproduce the bug
```python
data=load_dataset("xtreme","tatoeba.rus", split="validation")
```
## Solution
The library tries to access the **master** branch, but in the facebookresearch GitHub repo the file now lives on the **main** branch. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3320/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3320/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/60 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/60/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/60/comments | https://api.github.com/repos/huggingface/datasets/issues/60/events | https://github.com/huggingface/datasets/pull/60 | 614,372,553 | MDExOlB1bGxSZXF1ZXN0NDE0OTQyNjEy | 60 | Update to simplify some datasets conversion | [] | closed | false | null | 6 | 2020-05-07T22:02:24Z | 2020-05-08T10:38:32Z | 2020-05-08T10:18:24Z | null | This PR updates the encoding of `Values` like `integers`, `boolean` and `float` to use python casting and avoid having to cast in the dataset scripts, as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r420176626
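For illustration, the casting boils down to something like this (hypothetical helper, not the exact diff):
```python
def encode_value(dtype: str, value):
    # Cast with plain Python so dataset scripts no longer have to.
    if dtype.startswith("int"):
        return int(value)
    if dtype.startswith("float"):
        return float(value)
    if dtype == "bool":
        return bool(value)
    return value
```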
We could also change (not included in this PR yet):
- `supervized_keys` to make them a NamedTuple instead of a dataclass, and
- handle the `Translation` features specifically.
as mentioned here: https://github.com/huggingface/nlp/pull/37#discussion_r421740236
@patrickvonplaten @mariamabarham tell me if you want these two last changes as well. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/60/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/60/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/60.diff",
"html_url": "https://github.com/huggingface/datasets/pull/60",
"merged_at": "2020-05-08T10:18:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/60.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/60"
} | true | [
"Awesome! ",
"Also we should convert `tf.io.gfile.exists` into `os.path.exists` , `tf.io.gfile.listdir`into `os.listdir` and `tf.io.gfile.glob` into `glob.glob` (will need to add `import glob`)",
"> Also we should convert `tf.io.gfile.exists` into `os.path.exists` , `tf.io.gfile.listdir`into `os.listdir` and `... |
https://api.github.com/repos/huggingface/datasets/issues/4492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4492/comments | https://api.github.com/repos/huggingface/datasets/issues/4492/events | https://github.com/huggingface/datasets/pull/4492 | 1,271,112,497 | PR_kwDODunzps45pktu | 4,492 | Pin the revision in imagenet download links | [] | closed | false | null | 1 | 2022-06-14T17:15:17Z | 2022-06-14T17:35:13Z | 2022-06-14T17:25:45Z | null | Use the commit sha in the data files URLs of the imagenet-1k download script, in case we want to restructure the data files in the future. For example we may split it into many more shards for better paralellism.
cc @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4492/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4492/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4492.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4492",
"merged_at": "2022-06-14T17:25:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4492.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4492"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2026/comments | https://api.github.com/repos/huggingface/datasets/issues/2026/events | https://github.com/huggingface/datasets/issues/2026 | 828,194,467 | MDU6SXNzdWU4MjgxOTQ0Njc= | 2,026 | KeyError on using map after renaming a column | [] | closed | false | null | 3 | 2021-03-10T18:54:17Z | 2021-03-11T14:39:34Z | 2021-03-11T14:38:40Z | null | Hi,
I'm trying to use the `cifar10` dataset. I want to rename the `img` feature to `image` to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying the `prepare_train_features` function.
Here is what I try:
```python
transform = Compose([ToPILImage(),ToTensor(),Normalize([0.0,0.0,0.0],[1.0,1.0,1.0])])
def prepare_features(examples):
images = []
labels = []
print(examples)
for example_idx, example in enumerate(examples["image"]):
if transform is not None:
images.append(transform(examples["image"][example_idx].permute(2,0,1)))
else:
images.append(examples["image"][example_idx].permute(2,0,1))
labels.append(examples["label"][example_idx])
output = {"label":labels, "image":images}
return output
raw_dataset = load_dataset('cifar10')
raw_dataset.set_format('torch',columns=['img','label'])
raw_dataset = raw_dataset.rename_column('img','image')
features = datasets.Features({
"image": datasets.Array3D(shape=(3,32,32),dtype="float32"),
"label": datasets.features.ClassLabel(names=[
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
]),
})
train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)
```
The error:
```python
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-54-bf29672c53ee> in <module>()
14 ]),
15 })
---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000)
2 frames
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint)
1287 test_inputs = self[:2] if batched else self[0]
1288 test_indices = [0, 1] if batched else 0
-> 1289 update_data = does_function_return_dict(test_inputs, test_indices)
1290 logger.info("Testing finished, running the mapping function on the dataset")
1291
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices)
1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns]
1259 processed_inputs = (
-> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1261 )
1262 does_return_dict = isinstance(processed_inputs, Mapping)
<ipython-input-52-b4dccbafb70d> in prepare_features(examples)
3 labels = []
4 print(examples)
----> 5 for example_idx, example in enumerate(examples["image"]):
6 if transform is not None:
7 images.append(transform(examples["image"][example_idx].permute(2,0,1)))
KeyError: 'image'
```
The print statement inside returns this:
```python
{'label': tensor([6, 9])}
```
Apparently, neither `img` nor `image` exists after renaming.
Note that this code works fine with `img` everywhere.
Notebook: https://colab.research.google.com/drive/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing
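Update: reordering the two calls makes the snippet work, since the format columns then match the new name (see the explanation in the comments):
```python
raw_dataset = load_dataset("cifar10")
raw_dataset = raw_dataset.rename_column("img", "image")
# Set the format *after* renaming so the formatted columns use the new name.
raw_dataset.set_format("torch", columns=["image", "label"])
```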
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2026/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2026/timeline | null | completed | null | null | false | [
"Hi,\r\n\r\nActually, the error occurs due to these two lines:\r\n```python\r\nraw_dataset.set_format('torch',columns=['img','label'])\r\nraw_dataset = raw_dataset.rename_column('img','image')\r\n```\r\n`Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format... |
https://api.github.com/repos/huggingface/datasets/issues/1874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1874/comments | https://api.github.com/repos/huggingface/datasets/issues/1874/events | https://github.com/huggingface/datasets/pull/1874 | 807,786,094 | MDExOlB1bGxSZXF1ZXN0NTcyOTYzMjAy | 1,874 | Adding Europarl Bilingual dataset | [] | closed | false | null | 7 | 2021-02-13T17:02:04Z | 2021-03-04T10:38:22Z | 2021-03-04T10:38:22Z | null | Implementation of Europarl bilingual dataset from described [here](https://opus.nlpl.eu/Europarl.php).
This dataset allows one to use every language pair detailed in the original dataset. The loading script also handles the small errors contained in the original data: in very rare cases (about 1 in 10M), some keys reference nonexistent sentences.
I chose to follow the style of a similar dataset available in this repository: `multi_para_crawl`.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1874/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1874/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1874.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1874",
"merged_at": "2021-03-04T10:38:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1874.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1874"
} | true | [
"is there a way to check errors without subscribing to CircleCI? Because they want access to private repositories when logging.",
"I think you need to be logged in to check the errors unfortunately. Feel free to create an account with bitbucket maybe if you don't want it to access your private github repos",
"I... |
https://api.github.com/repos/huggingface/datasets/issues/2133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2133/comments | https://api.github.com/repos/huggingface/datasets/issues/2133/events | https://github.com/huggingface/datasets/issues/2133 | 843,149,680 | MDU6SXNzdWU4NDMxNDk2ODA= | 2,133 | bug in mlqa dataset | [] | closed | false | null | 3 | 2021-03-29T09:03:09Z | 2021-03-30T17:40:57Z | 2021-03-30T17:40:57Z | null | Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
"\u0643\u0645 \u0645\u0631\u0629 \u064a\u062a\u0645 \u0646\u0634\u0631\u0647\u0627 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0645\u0627 \u0647\u064a \u0627\u0644\u0648\u0631\u0642\u0629 \u0627\u0644\u064a\u0648\u0645\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0643\u0645 \u0639\u062f\u062f \u0627\u0644\u0627\u0648\u0631\u0627\u0642 \u0627\u0644\u0627\u062e\u0628\u0627\u0631\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0627\u0644\u062a\u064a \u0648\u062c\u062f\u062a \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?",
"\u0641\u064a \u0627\u064a \u0633\u0646\u0629 \u0628\u062f\u0627\u062a \u0648\u0631\u0642\u0629 \u0627\u0644\u0637\u0627\u0644\u0628 \u0627\u0644\u062d\u0633 \u0627\u0644\u0633\u0644\u064a\u0645 \u0628\u0627\u0644\u0646\u0634\u0631 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?"
]
```
The questions are in the wrong format and not readable. Could you please have a look? Thanks @lhoestq
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2133/timeline | null | completed | null | null | false | [
"If you print those questions, you get readable texts:\r\n```python\r\n>>> questions = [\r\n... \"\\u0645\\u062a\\u0649 \\u0628\\u062f\\u0627\\u062a \\u0627\\u0644\\u0645\\u062c\\u0644\\u0629 \\u0627\\u0644\\u0645\\u062f\\u0631\\u0633\\u064a\\u0629 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u064... |
https://api.github.com/repos/huggingface/datasets/issues/1115 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1115/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1115/comments | https://api.github.com/repos/huggingface/datasets/issues/1115/events | https://github.com/huggingface/datasets/issues/1115 | 757,127,527 | MDU6SXNzdWU3NTcxMjc1Mjc= | 1,115 | Incorrect URL for MRQA SQuAD train subset | [] | closed | false | null | 1 | 2020-12-04T14:05:24Z | 2020-12-06T17:14:22Z | 2020-12-06T17:14:22Z | null | https://github.com/huggingface/datasets/blob/4ef4c8f8b7a60e35c6fa21115fca9faae91c9f74/datasets/mrqa/mrqa.py#L53
The URL for the `train+SQuAD` subset of MRQA points to the dev set instead of the train set. It should be `https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1115/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1115/timeline | null | completed | null | null | false | [
"good catch !"
] |
https://api.github.com/repos/huggingface/datasets/issues/432 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/432/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/432/comments | https://api.github.com/repos/huggingface/datasets/issues/432/events | https://github.com/huggingface/datasets/pull/432 | 665,234,340 | MDExOlB1bGxSZXF1ZXN0NDU2MzQxNDk3 | 432 | Fix handling of config files while loading datasets from multiple processes | [] | closed | false | null | 4 | 2020-07-24T15:10:57Z | 2020-08-01T17:11:42Z | 2020-07-30T08:25:28Z | null | When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in parallel, it creates a race condition when a process tries to load the file, often resulting in a JSON decoding exception because the file is only partially written.
This pull request partially addresses this by checking whether the files are already identical before copying the downloaded copy over to the cached destination. There's still a race condition, but it is now less likely to occur if the library user takes some basic precautions, e.g., downloading all datasets to the cache before spawning multiple processes. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/432/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/432/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/432.diff",
"html_url": "https://github.com/huggingface/datasets/pull/432",
"merged_at": "2020-07-30T08:25:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/432.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/432"
} | true | [
"Ok for this but I think we may want to use the general `filelock` method we are using at other places in the library instead of filecmp (in particular `filelock` take care of being an atomic operation which is safer for concurrent processes)",
"Ok I see.\r\nWhy not use filelock in this case then ?",
"I think w... |
https://api.github.com/repos/huggingface/datasets/issues/2159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2159/comments | https://api.github.com/repos/huggingface/datasets/issues/2159/events | https://github.com/huggingface/datasets/issues/2159 | 848,851,962 | MDU6SXNzdWU4NDg4NTE5NjI= | 2,159 | adding ccnet dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2021-04-01T23:28:36Z | 2021-04-02T10:05:19Z | 2021-04-02T10:05:19Z | null | ## Adding a Dataset
- **Name:** ccnet
- **Description:**
Common Crawl
- **Paper:**
https://arxiv.org/abs/1911.00359
- **Data:**
https://github.com/facebookresearch/cc_net
- **Motivation:**
This is one of the most comprehensive clean monolingual datasets across a variety of languages, and quite important for cross-lingual research.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2159/timeline | null | completed | null | null | false | [
"closing since I think this is cc100, just the name has been changed. thanks "
] |
https://api.github.com/repos/huggingface/datasets/issues/3585 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3585/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3585/comments | https://api.github.com/repos/huggingface/datasets/issues/3585/events | https://github.com/huggingface/datasets/issues/3585 | 1,105,821,470 | I_kwDODunzps5B6X8e | 3,585 | Datasets streaming + map doesn't work for `Audio` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "cfd3d7",
"default": true,
"descript... | closed | false | null | 1 | 2022-01-17T12:55:42Z | 2022-01-20T13:28:00Z | 2022-01-20T13:28:00Z | null | ## Describe the bug
When using audio datasets in streaming mode, applying a `map(...)` before iterating leads to an error as the key `array` does not exist anymore.
## Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset("common_voice", "en", streaming=True, split="train")
def map_fn(batch):
print("audio keys", batch["audio"].keys())
batch["audio"] = batch["audio"]["array"][:100]
return batch
ds = ds.map(map_fn)
sample = next(iter(ds))
```
I think the audio is somehow decoded before `.map(...)` is actually called.
## Expected results
IMO, the above code snippet should work.
## Actual results
```bash
audio keys dict_keys(['path', 'bytes'])
Traceback (most recent call last):
File "./run_audio.py", line 15, in <module>
sample = next(iter(ds))
File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 341, in __iter__
for key, example in self._iter():
File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 338, in _iter
yield from ex_iterable
File "/home/patrick/python_bin/datasets/iterable_dataset.py", line 192, in __iter__
yield key, self.function(example)
File "./run_audio.py", line 9, in map_fn
batch["input"] = batch["audio"]["array"][:100]
KeyError: 'array'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.17.1.dev0
- Platform: Linux-5.3.0-64-generic-x86_64-with-glibc2.17
- Python version: 3.8.12
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3585/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3585/timeline | null | completed | null | null | false | [
"This seems related to https://github.com/huggingface/datasets/issues/3505."
] |
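A hedged workaround sketch for the snippet above: before decoding, streamed examples expose only `path` and `bytes`, so the bytes can be decoded manually. This assumes `soundfile` can read the underlying format (Common Voice ships MP3, so `librosa` or `torchaudio` may be needed instead):
```python
import io
import soundfile as sf

def map_fn(batch):
    # In streaming mode the audio is not decoded yet, so decode the
    # raw bytes ourselves instead of expecting an "array" key.
    array, sampling_rate = sf.read(io.BytesIO(batch["audio"]["bytes"]))
    batch["audio"] = array[:100]
    return batch
```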
https://api.github.com/repos/huggingface/datasets/issues/3905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3905/comments | https://api.github.com/repos/huggingface/datasets/issues/3905/events | https://github.com/huggingface/datasets/pull/3905 | 1,168,320,568 | PR_kwDODunzps40ZJQJ | 3,905 | Perplexity Metric Card | [] | closed | false | null | 3 | 2022-03-14T12:39:40Z | 2022-03-16T19:38:56Z | 2022-03-16T19:38:56Z | null | Add Perplexity metric card
Note that it is currently still missing the citation, but I plan to add it later today. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3905/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3905/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3905.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3905",
"merged_at": "2022-03-16T19:38:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3905.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3905"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3905). All of your documentation changes will be reflected on that endpoint.",
"I'm wondering if we should add that perplexity can be used for analyzing datasets as well",
"Otherwise, looks good! Good job, @emibaylor !"
] |
https://api.github.com/repos/huggingface/datasets/issues/3349 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3349/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3349/comments | https://api.github.com/repos/huggingface/datasets/issues/3349/events | https://github.com/huggingface/datasets/pull/3349 | 1,067,853,601 | PR_kwDODunzps4vOF-s | 3,349 | raise exception instead of using assertions. | [] | closed | false | null | 6 | 2021-12-01T01:37:51Z | 2021-12-20T16:07:27Z | 2021-12-20T16:07:27Z | null | fix for the remaining files https://github.com/huggingface/datasets/issues/3171 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3349/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3349/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3349.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3349",
"merged_at": "2021-12-20T16:07:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3349.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3349"
} | true | [
"@mariosasko - Thanks for the review & suggestions. Updated as per the suggestions. ",
"@mariosasko - Hello, Are there any additional changes required from my end??. Wondering if this PR can be merged or still pending on additional steps.",
"@mariosasko - The approved changes in the PR now has conflicts with th... |
https://api.github.com/repos/huggingface/datasets/issues/3831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3831/comments | https://api.github.com/repos/huggingface/datasets/issues/3831/events | https://github.com/huggingface/datasets/issues/3831 | 1,160,501,000 | I_kwDODunzps5FK9cI | 3,831 | when using to_tf_dataset with shuffle is true, not all completed batches are made | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2022-03-06T02:43:50Z | 2022-03-08T15:18:56Z | 2022-03-08T15:18:56Z | null | ## Describe the bug
When converting a dataset to a `tf.data.Dataset` using `to_tf_dataset` with `shuffle=True`, the remainder is not converted into a final batch.
## Steps to reproduce the bug
The sample code is in the notebook below:
https://colab.research.google.com/drive/1_oRXWsR38ElO1EYF9ayFoCU7Ou1AAej4?usp=sharing
## Expected results
Regardless of whether shuffle is true or not, a 67-row dataset should yield 5 batches when the batch size is 16.
## Actual results
4 batches
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3831/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3831/timeline | null | completed | null | null | false | [
"Maybe @Rocketknight1 can help here",
"Hi @greenned, this is expected behaviour for `to_tf_dataset`. By default, we drop the smaller 'remainder' batch during training (i.e. when `shuffle=True`). If you really want to keep that batch, you can set `drop_remainder=False` when calling `to_tf_dataset()`.",
"@Rocketk... |
https://api.github.com/repos/huggingface/datasets/issues/3208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3208/comments | https://api.github.com/repos/huggingface/datasets/issues/3208/events | https://github.com/huggingface/datasets/pull/3208 | 1,044,504,093 | PR_kwDODunzps4uFTIs | 3,208 | Pin keras version until TF fixes its release | [] | closed | false | null | 0 | 2021-11-04T09:13:32Z | 2021-11-04T09:30:55Z | 2021-11-04T09:30:54Z | null | Fix #3207. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3208/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3208/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3208.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3208",
"merged_at": "2021-11-04T09:30:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3208.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3208"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5137/comments | https://api.github.com/repos/huggingface/datasets/issues/5137/events | https://github.com/huggingface/datasets/issues/5137 | 1,414,642,723 | I_kwDODunzps5UUbwj | 5,137 | Align task tags in dataset metadata | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 14 | 2022-10-19T09:41:42Z | 2022-11-10T05:25:58Z | 2022-10-25T06:17:00Z | null | ## Describe
Once we have agreed on a common naming scheme for task tags across all open source projects, we should align on it.
## Steps
- [x] Align task tags in canonical datasets
- [x] task_categories: 4 datasets
- [x] task_ids (by @lhoestq)
- [x] Open PRs in community datasets
- [x] task_categories: 451 datasets
- [x] task_ids: 556 datasets
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5137/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5137/timeline | null | completed | null | null | false | [
"I removed all the invalid task_ids in datasts without namespace, based on the <s>(internal)</s> types.ts",
"(Types.ts is not internal it's public)",
"I have opened PRs to fix the task_ids in all datasets within a namespace as well.\r\n\r\nWorking on task_categories...",
"For future reference: this fix had so... |
https://api.github.com/repos/huggingface/datasets/issues/5172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5172/comments | https://api.github.com/repos/huggingface/datasets/issues/5172/events | https://github.com/huggingface/datasets/issues/5172 | 1,425,523,114 | I_kwDODunzps5U98Gq | 5,172 | Inconsistency behavior between handling local file protocol and other FS protocols | [] | open | false | null | 0 | 2022-10-27T12:03:20Z | 2022-10-27T12:05:19Z | null | null | ### Describe the bug
These lines are used during `load_from_disk`:
```
if is_remote_filesystem(fs):
dest_dataset_dict_path = extract_path_from_uri(dataset_dict_path)
else:
fs = fsspec.filesystem("file")
dest_dataset_dict_path = dataset_dict_path
```
If a local FS is given, it will use the URL as the path name. If a remote FS is given, it will use the path of the URL. This is inconsistent behavior when handling a file: when using a remote FS, you must write a URL, but for a local FS, even if you pass LocalFileSystem as `fs`, you still can't use a `file://` URL; it will be recognized as a directory named `file:`.
### Steps to reproduce the bug
```
import fsspec.core
url = "hdfs:///somewhere/MNIST"
# url = "file:///somewhere/MNIST"
fs, path = fsspec.core.url_to_fs(url)
fs.ls(path) # this will always work
load_from_disk(path, fs) # only works for local FS
load_from_disk(url, fs) # only works for remote FS
```
### Expected behavior
One of `url` or `path` should always work.
I think extracting the path from the given URL with `fsspec.core.url_to_fs`, instead of using `is_remote_filesystem` and `extract_path_from_uri`, would fix this, since:
```
fsspec.core.url_to_fs("/somewhere/MNIST") -> LocalFs, '/somewhere/MNIST'
fsspec.core.url_to_fs("file:///somewhere/MNIST") -> LocalFs, '/somewhere/MNIST'
fsspec.core.url_to_fs("hdfs:///somewhere/MNIST") -> HDFS, '/somewhere/MNIST'
```
and
```
fsspec.core.url_to_fs("file:///somewhere/MNIST") == fsspec.core.url_to_fs("/somewhere/MNIST")
```
In theory, this wouldn't break anything, since passing a local path or a remote URI still works. It would only affect local URIs (making them work too).
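A minimal sketch of the proposed resolution step (a hypothetical helper, not the library's actual code):
```python
import fsspec.core

def resolve_dataset_path(path_or_uri: str):
    # Handles "/local/path", "file:///local/path" and
    # "hdfs:///remote/path" uniformly: returns the filesystem plus
    # the plain path with any protocol prefix stripped.
    fs, path = fsspec.core.url_to_fs(path_or_uri)
    return fs, path
```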
### Environment info
- `datasets` version: 2.5.1
- Platform: Linux-5.4.205.1**HIDDEN**
- Python version: 3.7.10
- PyArrow version: 8.0.0
- Pandas version: 1.2.4
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5172/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5172/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2954/comments | https://api.github.com/repos/huggingface/datasets/issues/2954/events | https://github.com/huggingface/datasets/pull/2954 | 1,003,904,803 | PR_kwDODunzps4sHa8O | 2,954 | Run tests in parallel | [] | closed | false | null | 2 | 2021-09-22T07:00:44Z | 2021-09-28T06:55:51Z | 2021-09-28T06:55:51Z | null | Run CI tests in parallel to speed up the test suite.
Speed up results:
- Linux: from `7m 30s` to `5m 32s`
- Windows: from `13m 52s` to `11m 10s`
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2954/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2954/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2954.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2954",
"merged_at": "2021-09-28T06:55:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2954.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2954"
} | true | [
"There is a speed up in Windows machines:\r\n- From `13m 52s` to `11m 10s`\r\n\r\nIn Linux machines, some workers crash with error message:\r\n```\r\nOSError: [Errno 12] Cannot allocate memory\r\n```",
"There is also a speed up in Linux machines:\r\n- From `7m 30s` to `5m 32s`"
] |
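Presumably this parallelization relies on a pytest plugin such as pytest-xdist; a hedged sketch of the kind of invocation involved, launched from Python (the plugin, worker count, and flags are assumptions, not confirmed by the PR):
```python
import pytest

# Requires the pytest-xdist plugin. "-n 2" spawns two workers and
# "--dist loadfile" keeps all tests from one file on the same worker.
raise SystemExit(pytest.main(["-n", "2", "--dist", "loadfile", "tests/"]))
```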
https://api.github.com/repos/huggingface/datasets/issues/5096 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5096/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5096/comments | https://api.github.com/repos/huggingface/datasets/issues/5096/events | https://github.com/huggingface/datasets/issues/5096 | 1,403,379,816 | I_kwDODunzps5TpeBo | 5,096 | Transfer some canonical datasets under an organization namespace | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | open | false | null | 2 | 2022-10-10T15:44:31Z | 2023-06-07T07:51:54Z | null | null | As discussed during our @huggingface/datasets meeting, we are planning to move some "canonical" dataset scripts under their corresponding organization namespace (if this does not exist).
Conversely, if the dataset already exists under the organization namespace, we are deprecating the canonical one (and will eventually delete it).
First, we should test it using a dummy dataset/organization.
TODO:
- [x] Test with a dummy dataset
- [x] Create dummy canonical dataset: https://huggingface.co/datasets/dummy_canonical_dataset
- [x] Create dummy organization: https://huggingface.co/dummy-canonical-org
- [x] Transfer dummy canonical dataset to dummy organization
- [ ] Transfer datasets
- [x] babi_qa => facebook
- [x] cord19 => allenai
- [x] emotion => dair-ai
- [ ] gem => GEM
- [x] hendrycks_test => cais/mmlu
- [x] indonlu => indonlp
- [ ] multilingual_librispeech => facebook
- It already exists as "facebook/multilingual_librispeech"
- [ ] oscar => oscar-corpus
- [x] peer_read => allenai
- [x] qasper => allenai
- [x] reddit => webis/tldr-17
- [x] russian_super_glue => russiannlp
- [x] rvl_cdip => aharley
- [x] s2orc => allenai
- [x] scicite => allenai
- [x] scifact => allenai
- [x] scitldr => allenai
- [x] swiss_judgment_prediction => rcds
- [x] the_pile => EleutherAI
- [ ] wmt14, wmt15, wmt16, wmt17, wmt18, wmt19,... => wmt
- [ ] Deprecate (and eventually remove) datasets that cannot be transferred because they already exist
- [x] banking77 => PolyAI
- [x] common_voice => mozilla-foundation
- [x] german_legal_entity_recognition => elenanereiss
- ...
EDIT: the list above is continuously being updated | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5096/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5096/timeline | null | null | null | null | false | [
"The transfer of the dummy dataset to the dummy org works as expected:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"dummy_canonical_dataset\", download_mode=\"force_redownload\"); ds\r\nDownloading builder script: 100%|███████████████████████████████████████████████████████████████... |
https://api.github.com/repos/huggingface/datasets/issues/1139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1139/comments | https://api.github.com/repos/huggingface/datasets/issues/1139/events | https://github.com/huggingface/datasets/pull/1139 | 757,393,158 | MDExOlB1bGxSZXF1ZXN0NTMyNzc3OTg2 | 1,139 | Add ReFreSD dataset | [] | closed | false | null | 3 | 2020-12-04T20:45:11Z | 2020-12-16T16:01:18Z | 2020-12-16T16:01:18Z | null | This PR adds the **ReFreSD dataset**.
The original data is hosted [on this github repo](https://github.com/Elbria/xling-SemDiv) and we use the `REFreSD_rationale` file to expose all the data.
Need feedback on:
- I couldn't generate the dummy data. The file we download is a TSV file but has no extension; I suppose this is the problem. I'm sure there is a simple trick to make this work.
- The feature names.
- I don't know if it's better to stick to the classic `sentence1`, `sentence2` or to `sentence_en`, `sentence_fr` to be more explicit.
- There is a binary label (called `label`, no problem here), and a 3-class label called `#3_labels` in the original tsv. I changed it to `all_labels`, but I'm sure there is a better name.
- The rationales are lists of integers, initially extracted as strings. I wonder what the best way to treat them is; any ideas? Also, I couldn't manage to make a `Sequence` of `int8`, but I'm sure I've missed something simple.
Thanks in advance | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1139/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1139.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1139",
"merged_at": "2020-12-16T16:01:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1139.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1139"
} | true | [
"Cool dataset! Replying in-line:\r\n\r\n> This PR adds the **ReFreSD dataset**.\r\n> The original data is hosted [on this github repo](https://github.com/Elbria/xling-SemDiv) and we use the `REFreSD_rationale` to expose all the data.\r\n> \r\n> Need feedback on:\r\n> \r\n> * I couldn't generate the dummy data. The ... |
https://api.github.com/repos/huggingface/datasets/issues/3688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3688/comments | https://api.github.com/repos/huggingface/datasets/issues/3688/events | https://github.com/huggingface/datasets/issues/3688 | 1,127,218,321 | I_kwDODunzps5DL_yR | 3,688 | Pyarrow version error | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2022-02-08T12:53:59Z | 2022-02-09T06:35:33Z | 2022-02-09T06:35:32Z | null | ## Describe the bug
I installed datasets (versions 1.17.0, 1.18.0, 1.18.3) but I'm currently not able to import it because of pyarrow. When I try to import it, I get the following error:
`To use datasets, the module pyarrow>=3.0.0 is required, and the current version of pyarrow doesn't match this condition`.
I tried all versions of pyarrow except `4.0.0` but still get the same error.
## Steps to reproduce the bug
```python
import datasets
```
## Expected results
`import datasets` should succeed without errors.
## Actual results
AttributeError Traceback (most recent call last)
<ipython-input-19-652e886d387f> in <module>
----> 1 import datasets
~\AppData\Local\Continuum\anaconda3\lib\site-packages\datasets\__init__.py in <module>
26
27
---> 28 if _version.parse(pyarrow.__version__).major < 3:
29 raise ImportWarning(
30 "To use `datasets`, the module `pyarrow>=3.0.0` is required, and the current version of `pyarrow` doesn't match this condition.\n"
AttributeError: 'Version' object has no attribute 'major'
## Environment info
Traceback (most recent call last):
File "c:\users\alex\appdata\local\continuum\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "c:\users\alex\appdata\local\continuum\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\Alex\AppData\Local\Continuum\anaconda3\Scripts\datasets-cli.exe\__main__.py", line 5, in <module>
File "c:\users\alex\appdata\local\continuum\anaconda3\lib\site-packages\datasets\__init__.py", line 28, in <module>
if _version.parse(pyarrow.__version__).major < 3:
AttributeError: 'Version' object has no attribute 'major'
- `datasets` version:
- Platform: Linux(Ubuntu) and Windows: conda on the both
- Python version: 3.7
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3688/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3688/timeline | null | completed | null | null | false | [
"Hi @Zaker237, thanks for reporting.\r\n\r\nThis is weird: the error you get is only thrown if the installed pyarrow version is less than 3.0.0.\r\n\r\nCould you please check that you install pyarrow in the same Python virtual environment where you installed datasets?\r\n\r\nFrom the Python command line (or termina... |
https://api.github.com/repos/huggingface/datasets/issues/2128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2128/comments | https://api.github.com/repos/huggingface/datasets/issues/2128/events | https://github.com/huggingface/datasets/issues/2128 | 843,023,910 | MDU6SXNzdWU4NDMwMjM5MTA= | 2,128 | Dialogue action slot name and value are reversed in MultiWoZ 2.2 | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 1 | 2021-03-29T06:34:02Z | 2021-03-31T12:48:01Z | 2021-03-31T12:48:01Z | null | Hi @yjernite, thank you for adding MultiWoZ 2.2 to the huggingface datasets platform. It is very useful!
I spotted an error: the order of dialogue action slot names and values is reversed.
https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.py#L251-L262 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2128/timeline | null | completed | null | null | false | [
"Hi\r\nGood catch ! Thanks for reporting\r\n\r\nIf you are interested in contributing, feel free to open a PR to fix this :) "
] |
https://api.github.com/repos/huggingface/datasets/issues/1194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1194/comments | https://api.github.com/repos/huggingface/datasets/issues/1194/events | https://github.com/huggingface/datasets/pull/1194 | 757,880,647 | MDExOlB1bGxSZXF1ZXN0NTMzMTY0MDcz | 1,194 | Add msr_text_compression | [] | closed | false | null | 1 | 2020-12-06T09:06:11Z | 2020-12-09T10:53:45Z | 2020-12-09T10:53:45Z | null | Add [MSR Abstractive Text Compression Dataset](https://msropendata.com/datasets/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1194/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1194.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1194",
"merged_at": "2020-12-09T10:53:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1194.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1194"
} | true | [
"the `RemoteDatasetTest ` error in the CI is fixed on master so it's fine"
] |