| url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | labels | state | locked | milestone | comments | created_at | updated_at | closed_at | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request | is_pull_request | comments_text |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/3566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3566/comments | https://api.github.com/repos/huggingface/datasets/issues/3566/events | https://github.com/huggingface/datasets/pull/3566 | 1,100,155,902 | PR_kwDODunzps4w2Tcc | 3,566 | Add initial electricity time series dataset | [] | closed | false | null | 2 | 2022-01-12T10:21:32Z | 2022-02-15T13:31:48Z | 2022-02-15T13:31:48Z | null | Here is an initial prototype time series dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3566/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3566/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3566.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3566",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3566.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3566"
} | true | [
"@kashif Some commits on the PR branch are not authored by you, so could you please open a new PR and not use rebase this time :)? You can copy and paste the dataset dir to the new branch. \r\n\r\n",
"making a new PR"
] |
https://api.github.com/repos/huggingface/datasets/issues/5433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5433/comments | https://api.github.com/repos/huggingface/datasets/issues/5433/events | https://github.com/huggingface/datasets/issues/5433 | 1,536,017,901 | I_kwDODunzps5bjcXt | 5,433 | Support latest Docker image in CI benchmarks | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 3 | 2023-01-17T09:06:08Z | 2023-01-18T06:29:08Z | 2023-01-18T06:29:08Z | null | Once we find out the root cause of:
- #5431
we should revert the temporary pin on the Docker image version introduced by:
- #5432 | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5433/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5433/timeline | null | completed | null | null | false | [
"Sorry, it was us:[^1] https://github.com/iterative/cml/pull/1317 & https://github.com/iterative/cml/issues/1319#issuecomment-1385599559; should be fixed with [v0.18.17](https://github.com/iterative/cml/releases/tag/v0.18.17).\r\n\r\n[^1]: More or less, see https://github.com/yargs/yargs/issues/873.",
"Opened htt... |
https://api.github.com/repos/huggingface/datasets/issues/2746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2746/comments | https://api.github.com/repos/huggingface/datasets/issues/2746/events | https://github.com/huggingface/datasets/issues/2746 | 958,551,619 | MDU6SXNzdWU5NTg1NTE2MTk= | 2,746 | Cannot load `few-nerd` dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 6 | 2021-08-02T22:18:57Z | 2021-11-16T08:51:34Z | 2021-08-03T19:45:43Z | null | ## Describe the bug
Cannot load `few-nerd` dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset('few-nerd', 'supervised')
```
## Actual results
Executing above code will give the following error:
```
Using the latest cached version of the module from /Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53 (last modified on Wed Jun 2 11:34:25 2021) since it couldn't be found locally at /Users/Mehrad/Documents/GitHub/genienlp/few-nerd/few-nerd.py, or remotely (FileNotFoundError).
Downloading and preparing dataset few_nerd/supervised (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /Users/Mehrad/.cache/huggingface/datasets/few_nerd/supervised/0.0.0/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53...
Traceback (most recent call last):
File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 693, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1107, in _prepare_split
disable=bool(logging.get_verbosity() == logging.NOTSET),
File "/Users/Mehrad/opt/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__
for obj in iterable:
File "/Users/Mehrad/.cache/huggingface/modules/datasets_modules/datasets/few-nerd/62464ace912a40a0f33a11a8310f9041c9dc3590ff2b3c77c14d83ca53cfec53/few-nerd.py", line 196, in _generate_examples
with open(filepath, encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/Users/Mehrad/.cache/huggingface/datasets/downloads/supervised/train.json'
```
The bug is probably in identifying and downloading the dataset. If I download the json splits directly from [link](https://github.com/nbroad1881/few-nerd/tree/main/uncompressed) and put them under the downloads directory, they will be processed into arrow format correctly.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Python version: 3.8
- PyArrow version: 1.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2746/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2746/timeline | null | completed | null | null | false | [
"Hi @Mehrad0711,\r\n\r\nI'm afraid there is no \"canonical\" Hugging Face dataset named \"few-nerd\".\r\n\r\nThere are 2 kinds of datasets hosted at the Hugging Face Hub:\r\n- canonical datasets (their identifier contains no slash \"/\"): we, the Hugging Face team, supervise their implementation and we make sure th... |
https://api.github.com/repos/huggingface/datasets/issues/5341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5341/comments | https://api.github.com/repos/huggingface/datasets/issues/5341/events | https://github.com/huggingface/datasets/pull/5341 | 1,484,376,644 | PR_kwDODunzps5Exohx | 5,341 | Remove tasks.json | [] | closed | false | null | 1 | 2022-12-08T11:04:35Z | 2022-12-09T12:26:21Z | 2022-12-09T12:23:20Z | null | After discussions in https://github.com/huggingface/datasets/pull/5335 we should remove this file that is not used anymore. We should update https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts instead. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5341/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5341/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5341.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5341",
"merged_at": "2022-12-09T12:23:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5341.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5341"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3467 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3467/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3467/comments | https://api.github.com/repos/huggingface/datasets/issues/3467/events | https://github.com/huggingface/datasets/pull/3467 | 1,085,870,665 | PR_kwDODunzps4wIoqd | 3,467 | Push dataset infos.json to Hub | [] | closed | false | null | 1 | 2021-12-21T14:07:13Z | 2021-12-21T17:00:10Z | 2021-12-21T17:00:09Z | null | When doing `push_to_hub`, the feature types are lost (see issue https://github.com/huggingface/datasets/issues/3394).
This PR fixes this by also pushing a `dataset_infos.json` file to the Hub, that stores the feature types.
Other minor changes:
- renamed the `___` separator to `--`, since `--` is now disallowed in a name in the back-end.
I tested this feature with datasets like conll2003 that has feature types like `ClassLabel` that were previously lost.
Close https://github.com/huggingface/datasets/issues/3394
I would like to include this in today's release (though not mandatory), so feel free to comment/suggest changes | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3467/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3467/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3467.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3467",
"merged_at": "2021-12-21T17:00:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3467.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3467"
} | true | [
"The change from `___` to `--` was allowed by https://github.com/huggingface/moon-landing/pull/1657"
] |
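To illustrate what the PR above fixes, a hedged round-trip sketch: after this change, `push_to_hub` also uploads a `dataset_infos.json` carrying the feature types, so `ClassLabel` columns survive a reload. The repo id below is hypothetical:

```python
from datasets import load_dataset

ds = load_dataset("conll2003", split="train")
ds.push_to_hub("username/conll2003-copy")  # hypothetical repo id

# With dataset_infos.json pushed alongside the data,
# ClassLabel feature types are restored on reload.
reloaded = load_dataset("username/conll2003-copy", split="train")
print(reloaded.features["ner_tags"])
```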
https://api.github.com/repos/huggingface/datasets/issues/1826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1826/comments | https://api.github.com/repos/huggingface/datasets/issues/1826/events | https://github.com/huggingface/datasets/pull/1826 | 802,074,744 | MDExOlB1bGxSZXF1ZXN0NTY4Mjc4OTI2 | 1,826 | Print error message with filename when malformed CSV | [] | closed | false | null | 0 | 2021-02-05T11:07:59Z | 2021-02-09T17:39:27Z | 2021-02-09T17:39:27Z | null | Print error message specifying filename when malformed CSV file.
Close #1821 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1826/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1826/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1826.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1826",
"merged_at": "2021-02-09T17:39:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1826.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1826"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1311/comments | https://api.github.com/repos/huggingface/datasets/issues/1311/events | https://github.com/huggingface/datasets/pull/1311 | 759,514,819 | MDExOlB1bGxSZXF1ZXN0NTM0NTA3NjM1 | 1,311 | Add OPUS Bible Corpus (102 Languages) | [] | closed | false | null | 1 | 2020-12-08T14:57:08Z | 2020-12-09T15:30:57Z | 2020-12-09T15:30:56Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1311/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1311/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1311.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1311",
"merged_at": "2020-12-09T15:30:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1311.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1311"
} | true | [
"@lhoestq done"
] | |
https://api.github.com/repos/huggingface/datasets/issues/1974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1974/comments | https://api.github.com/repos/huggingface/datasets/issues/1974/events | https://github.com/huggingface/datasets/pull/1974 | 820,122,223 | MDExOlB1bGxSZXF1ZXN0NTgzMTE5MDI0 | 1,974 | feat(docs): navigate with left/right arrow keys | [] | closed | false | null | 0 | 2021-03-02T15:24:50Z | 2021-03-04T10:44:12Z | 2021-03-04T10:42:48Z | null | Enables docs navigation with left/right arrow keys. It can be useful for those who navigate with the keyboard a lot.
More info : https://github.com/sphinx-doc/sphinx/pull/2064
You can try here : https://29353-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1974/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1974/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1974.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1974",
"merged_at": "2021-03-04T10:42:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1974.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1974"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/120/comments | https://api.github.com/repos/huggingface/datasets/issues/120/events | https://github.com/huggingface/datasets/issues/120 | 618,737,783 | MDU6SXNzdWU2MTg3Mzc3ODM= | 120 | 🐛 `map` not working | [] | closed | false | null | 1 | 2020-05-15T06:43:08Z | 2020-05-15T07:02:38Z | 2020-05-15T07:02:38Z | null | I'm trying to run a basic example (mapping function to add a prefix).
[Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing)
```python
import nlp
dataset = nlp.load_dataset('squad', split='validation[:10%]')
def test(sample):
sample['title'] = "test prefix @@@ " + sample["title"]
return sample
print(dataset[0]['title'])
dataset.map(test)
print(dataset[0]['title'])
```
Output :
> Super_Bowl_50
Super_Bowl_50
Expected output :
> Super_Bowl_50
test prefix @@@ Super_Bowl_50 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/120/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/120/timeline | null | completed | null | null | false | [
"I didn't assign the output 🤦♂️\r\n\r\n```python\r\ndataset.map(test)\r\n```\r\n\r\nshould be :\r\n\r\n```python\r\ndataset = dataset.map(test)\r\n```"
] |
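The self-answer in the comment above is the whole fix: `map` is not in-place. A minimal runnable sketch of the corrected example from the issue (the library was still named `nlp` at the time, before its rename to `datasets`):

```python
import nlp

dataset = nlp.load_dataset("squad", split="validation[:10%]")

def test(sample):
    sample["title"] = "test prefix @@@ " + sample["title"]
    return sample

# map returns a new dataset rather than mutating the original,
# so the result must be reassigned.
dataset = dataset.map(test)
print(dataset[0]["title"])  # test prefix @@@ Super_Bowl_50
```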
https://api.github.com/repos/huggingface/datasets/issues/5077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5077/comments | https://api.github.com/repos/huggingface/datasets/issues/5077/events | https://github.com/huggingface/datasets/pull/5077 | 1,398,080,859 | PR_kwDODunzps5AOs9L | 5,077 | Fix passed download_config in HubDatasetModuleFactoryWithoutScript | [] | closed | false | null | 1 | 2022-10-05T16:42:36Z | 2022-10-06T05:31:22Z | 2022-10-06T05:29:06Z | null | Fix passed `download_config` in `HubDatasetModuleFactoryWithoutScript`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5077/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5077/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5077.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5077",
"merged_at": "2022-10-06T05:29:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5077.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5077"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/637/comments | https://api.github.com/repos/huggingface/datasets/issues/637/events | https://github.com/huggingface/datasets/pull/637 | 703,539,909 | MDExOlB1bGxSZXF1ZXN0NDg4NjMwNzk4 | 637 | Add MATINF | [] | closed | false | null | 0 | 2020-09-17T12:24:53Z | 2020-09-17T13:23:18Z | 2020-09-17T13:23:17Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/637/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/637/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/637.diff",
"html_url": "https://github.com/huggingface/datasets/pull/637",
"merged_at": "2020-09-17T13:23:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/637.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/637"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/6033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6033/comments | https://api.github.com/repos/huggingface/datasets/issues/6033/events | https://github.com/huggingface/datasets/issues/6033 | 1,804,482,051 | I_kwDODunzps5rjjYD | 6,033 | `map` function doesn't fully utilize `input_columns`. | [] | closed | false | null | 0 | 2023-07-14T08:49:28Z | 2023-07-14T09:16:04Z | 2023-07-14T09:16:04Z | null | ### Describe the bug
I wanted to select only some columns of data.
And I thought that's why the argument `input_columns` exists.
What I expected is like this:
If there are ["a", "b", "c", "d"] columns, and if I set `input_columns=["a", "d"]`, the data will have only ["a", "d"] columns.
But it doesn't select columns.
It preserves existing columns.
The main cause is `update` function of `dictionary` type `transformed_batch`.
https://github.com/huggingface/datasets/blob/682d21e94ab1e64c11b583de39dc4c93f0101c5a/src/datasets/iterable_dataset.py#L687-L691
`transformed_batch` gets all the columns by `transformed_batch = dict(batch)`.
Even though `function_args` selects only `input_columns`, `update` preserves the columns other than `input_columns`.
I think it should take a new dictionary with columns in `input_columns` like this:
```
# transformed_batch = dict(batch)
# transformed_batch.update(self.function(*function_args, **self.fn_kwargs)
# This is what I think correct.
transformed_batch = self.function(*function_args, **self.fn_kwargs)
```
Let me know how to use `input_columns`.
### Steps to reproduce the bug
Described all above.
### Expected behavior
Described all above.
### Environment info
datasets: 2.12
python: 3.8 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6033/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6033/timeline | null | completed | null | null | false | [] |
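As the issue above shows, `input_columns` only restricts what is passed into the function; it never drops columns from the output. A minimal sketch of getting the selection behavior the reporter expected, using `map`'s separate `remove_columns` argument (shown on a regular `Dataset`; the issue itself concerns the iterable code path):

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2], "b": [3, 4], "c": [5, 6], "d": [7, 8]})

# input_columns feeds only "a" and "d" to the function;
# remove_columns actually drops "b" and "c" from the result.
ds = ds.map(
    lambda a, d: {"a": a, "d": d},
    input_columns=["a", "d"],
    remove_columns=["b", "c"],
)
print(ds.column_names)  # ['a', 'd']
```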
https://api.github.com/repos/huggingface/datasets/issues/1163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1163/comments | https://api.github.com/repos/huggingface/datasets/issues/1163/events | https://github.com/huggingface/datasets/pull/1163 | 757,711,340 | MDExOlB1bGxSZXF1ZXN0NTMzMDM4Mzc3 | 1,163 | Added memat : Xhosa-English parallel corpora | [] | closed | false | null | 2 | 2020-12-05T16:08:50Z | 2020-12-07T10:40:24Z | 2020-12-07T10:40:24Z | null | Added memat : Xhosa-English parallel corpora
for more info : http://opus.nlpl.eu/memat.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1163/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1163/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1163.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1163",
"merged_at": "2020-12-07T10:40:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1163.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1163"
} | true | [
"The `RemoteDatasetTest` CI fail is fixed on master so it's fine",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/5908 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5908/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5908/comments | https://api.github.com/repos/huggingface/datasets/issues/5908/events | https://github.com/huggingface/datasets/issues/5908 | 1,728,653,935 | I_kwDODunzps5nCSpv | 5,908 | Unbearably slow sorting on big mapped datasets | [] | open | false | null | 6 | 2023-05-27T11:08:32Z | 2023-06-13T17:45:10Z | null | null | ### Describe the bug
For me, with ~40k lines, sorting took 3.5 seconds on a flattened dataset (including the flatten operation) and 22.7 seconds on a mapped dataset (right after sharding), which is about a 5x slowdown. Moreover, it seems to slow down exponentially with bigger datasets (I wasn't able to sort 700k lines at all without flattening; with flattening it takes about a minute).
### Steps to reproduce the bug
```Python
from datasets import load_dataset
import time
dataset = load_dataset("xnli", "en", split="train")
dataset = dataset.shard(10, 0)
print(len(dataset))
t = time.time()
# dataset = dataset.flatten_indices() # uncomment this line and it's fast
dataset = dataset.sort("label", reverse=True, load_from_cache_file=False)
print(f"finished in {time.time() - t:.4f} seconds")
```
### Expected behavior
Expect sorting to take the same or less time than flattening and then sorting.
### Environment info
- `datasets` version: 2.12.1.dev0 (same with 2.12.0 too)
- Platform: Windows-10-10.0.22621-SP0
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5908/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5908/timeline | null | null | null | null | false | [
"Hi ! `shard` currently returns a slow dataset by default, with examples evenly distributed in the dataset.\r\n\r\nYou can get a fast dataset using `contiguous=True` (which should be the default imo):\r\n\r\n```python\r\ndataset = dataset.shard(10, 0, contiguous=True)\r\n```\r\n\r\nThis way you don't need to flatte... |
https://api.github.com/repos/huggingface/datasets/issues/2207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2207/comments | https://api.github.com/repos/huggingface/datasets/issues/2207/events | https://github.com/huggingface/datasets/issues/2207 | 855,267,383 | MDU6SXNzdWU4NTUyNjczODM= | 2,207 | making labels consistent across the datasets | [] | closed | false | null | 2 | 2021-04-11T10:03:56Z | 2022-06-01T16:23:08Z | 2022-06-01T16:21:10Z | null | Hi
For accessing the labels one can type
```
>>> a.features['label']
ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)
```
The labels, however, are sometimes not consistent with the actual labels. For instance, in the case of XNLI the actual labels are 0, 1, 2, but if one tries to access them as above they appear as entailment, neutral, contradiction.
It would be great to have the labels consistent.
thanks
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2207/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2207/timeline | null | completed | null | null | false | [
"Hi ! The ClassLabel feature type encodes the labels as integers.\r\nThe integer corresponds to the index of the label name in the `names` list of the ClassLabel.\r\nHere that means that the labels are 'entailment' (0), 'neutral' (1), 'contradiction' (2).\r\n\r\nYou can get the label names back by using `a.features... |
https://api.github.com/repos/huggingface/datasets/issues/5729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5729/comments | https://api.github.com/repos/huggingface/datasets/issues/5729/events | https://github.com/huggingface/datasets/pull/5729 | 1,661,929,923 | PR_kwDODunzps5N_pvI | 5,729 | Fix nondeterministic sharded data split order | [] | closed | false | null | 3 | 2023-04-11T07:34:20Z | 2023-04-26T15:12:25Z | 2023-04-26T15:05:12Z | null | This PR makes the order of the split names deterministic. Before it was nondeterministic because we were iterating over `set` elements.
Fix #5728. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5729/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5729/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5729.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5729",
"merged_at": "2023-04-26T15:05:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5729.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5729"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The error in the CI was unrelated to this PR. I have merged main branch once that has been fixed.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### B... |
https://api.github.com/repos/huggingface/datasets/issues/3343 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3343/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3343/comments | https://api.github.com/repos/huggingface/datasets/issues/3343/events | https://github.com/huggingface/datasets/pull/3343 | 1,067,505,507 | PR_kwDODunzps4vM8yB | 3,343 | Better error message when download fails | [] | closed | false | null | 0 | 2021-11-30T17:38:50Z | 2021-12-01T11:27:59Z | 2021-12-01T11:27:58Z | null | From our discussions in https://github.com/huggingface/datasets/issues/3269 and https://github.com/huggingface/datasets/issues/3282 it would be nice to have better messages if a download fails.
In particular the error now shows:
- the error from the HEAD request if there's one
- otherwise the response code of the HEAD request
I also added an error to tell users to pass `use_auth_token` when the Hugging Face Hub returns 401 (Unauthorized).
While playing around with this, I also fixed a minor issue with the `force_download` parameter that was not always taken into account.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3343/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3343/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3343.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3343",
"merged_at": "2021-12-01T11:27:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3343.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3343"
} | true | [] |
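For the 401 case mentioned above, the user-side remedy is to authenticate. A minimal sketch using the API of that era; `use_auth_token` has since been superseded by `token` in newer releases, and the repo id is hypothetical:

```python
from datasets import load_dataset

# After `huggingface-cli login`, passing True picks up the stored token;
# an explicit token string also works.
dataset = load_dataset("username/private-dataset", use_auth_token=True)
```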
https://api.github.com/repos/huggingface/datasets/issues/1609 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1609/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1609/comments | https://api.github.com/repos/huggingface/datasets/issues/1609/events | https://github.com/huggingface/datasets/issues/1609 | 771,421,881 | MDU6SXNzdWU3NzE0MjE4ODE= | 1,609 | Not able to use 'jigsaw_toxicity_pred' dataset | [] | closed | false | null | 2 | 2020-12-19T17:35:48Z | 2020-12-22T16:42:24Z | 2020-12-22T16:42:23Z | null | When trying to use jigsaw_toxicity_pred dataset, like this in a [colab](https://colab.research.google.com/drive/1LwO2A5M2X5dvhkAFYE4D2CUT3WUdWnkn?usp=sharing):
```
from datasets import list_datasets, list_metrics, load_dataset, load_metric
ds = load_dataset("jigsaw_toxicity_pred")
```
I see below error:
> FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py
During handling of the above exception, another exception occurred:
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, **download_kwargs)
280 raise FileNotFoundError(
281 "Couldn't find file locally at {}, or remotely at {} or {}".format(
--> 282 combined_path, github_file_path, file_path
283 )
284 )
FileNotFoundError: Couldn't find file locally at jigsaw_toxicity_pred/jigsaw_toxicity_pred.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/jigsaw_toxicity_pred/jigsaw_toxicity_pred.py | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1609/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1609/timeline | null | completed | null | null | false | [
"Hi @jassimran,\r\nThe `jigsaw_toxicity_pred` dataset has not been released yet, it will be available with version 2 of `datasets`, coming soon.\r\nYou can still access it by installing the master (unreleased) version of datasets directly :\r\n`pip install git+https://github.com/huggingface/datasets.git@master`\r\n... |
https://api.github.com/repos/huggingface/datasets/issues/1136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1136/comments | https://api.github.com/repos/huggingface/datasets/issues/1136/events | https://github.com/huggingface/datasets/pull/1136 | 757,341,607 | MDExOlB1bGxSZXF1ZXN0NTMyNzM0MzQ4 | 1,136 | minor change in description in paws-x.py and updated dataset_infos | [] | closed | false | null | 0 | 2020-12-04T19:17:49Z | 2020-12-06T18:02:57Z | 2020-12-06T18:02:57Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1136/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1136/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1136.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1136",
"merged_at": "2020-12-06T18:02:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1136.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1136"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/715/comments | https://api.github.com/repos/huggingface/datasets/issues/715/events | https://github.com/huggingface/datasets/pull/715 | 714,690,192 | MDExOlB1bGxSZXF1ZXN0NDk3NzMwMDQ2 | 715 | Use python read for text dataset | [] | closed | false | null | 7 | 2020-10-05T09:47:55Z | 2020-10-05T13:13:18Z | 2020-10-05T13:13:17Z | null | As mentioned in #622 the pandas reader used for text dataset doesn't work properly when there are \r characters in the text file.
Instead I switched to pure python using `open` and `read`.
From my benchmark on a 100MB text file, it's the same speed as the previous pandas reader. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 3,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/715/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/715/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/715.diff",
"html_url": "https://github.com/huggingface/datasets/pull/715",
"merged_at": "2020-10-05T13:13:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/715.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/715"
} | true | [
"One thing though, could we try to read the files in parallel?",
"We could but I'm not sure this would help a lot since the bottleneck is the drive IO if the files are big enough.\r\nIt could make sense for very small files.",
"Looks like windows is not a big fan of this approach\r\nI'm working on a fix",
"I ... |
https://api.github.com/repos/huggingface/datasets/issues/725 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/725/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/725/comments | https://api.github.com/repos/huggingface/datasets/issues/725/events | https://github.com/huggingface/datasets/pull/725 | 718,985,641 | MDExOlB1bGxSZXF1ZXN0NTAxMjUxODI1 | 725 | pretty print dataset objects | [] | closed | false | null | 2 | 2020-10-12T02:03:46Z | 2020-10-23T16:24:35Z | 2020-10-23T09:00:46Z | null | Currently, if I do:
```
from datasets import load_dataset
load_dataset("wikihow", 'all', data_dir="/hf/pegasus-datasets/wikihow/")
```
I get:
```
DatasetDict({'train': Dataset(features: {'text': Value(dtype='string', id=None),
'headline': Value(dtype='string', id=None), 'title': Value(dtype='string',
id=None)}, num_rows: 157252), 'validation': Dataset(features: {'text':
Value(dtype='string', id=None), 'headline': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None)}, num_rows: 5599), 'test':
Dataset(features: {'text': Value(dtype='string', id=None), 'headline':
Value(dtype='string', id=None), 'title': Value(dtype='string', id=None)},
num_rows: 5577)})
```
This is not very readable.
Can we either have a better `__repr__` or have a custom method to nicely pprint the dataset object?
Here is my very simple attempt. With this PR, it produces:
```
DatasetDict({
train: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 157252
})
validation: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 5599
})
test: Dataset({
features: ['text', 'headline', 'title'],
num_rows: 5577
})
})
```
I did omit the data types on purpose to make it more readable, but it shouldn't be too difficult to integrate those too.
Note that this PR also fixes an inconsistency in the output: on master the enclosing `{}` is missing for `Dataset` but present for `DatasetDict`; or perhaps that was by design.
I'm totally not attached to this format, just wanting something more readable. One approach could be to serialize to `json.dumps` or something similar. It'd make the indentation simpler.
Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/725/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/725/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/725.diff",
"html_url": "https://github.com/huggingface/datasets/pull/725",
"merged_at": "2020-10-23T09:00:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/725.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/725"
} | true | [
"Great, as you found it useful I improved the code a bit to automate indentation in the parent class, so that the child repr doesn't need to guess the indentation level, while repr'ing nicely on its own.\r\n\r\n- do we want indent=4 or 2?\r\n- do we want `{` ... `}` or w/o?\r\n\r\ncurrently it's indent4 and w/ curl... |
https://api.github.com/repos/huggingface/datasets/issues/2483 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2483/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2483/comments | https://api.github.com/repos/huggingface/datasets/issues/2483/events | https://github.com/huggingface/datasets/pull/2483 | 918,871,712 | MDExOlB1bGxSZXF1ZXN0NjY4MjU1Mjg1 | 2,483 | Use gc.collect only when needed to avoid slow downs | [] | closed | false | null | 2 | 2021-06-11T15:09:30Z | 2021-06-18T19:25:06Z | 2021-06-11T15:31:36Z | null | In https://github.com/huggingface/datasets/commit/42320a110d9d072703814e1f630a0d90d626a1e6 we added a call to gc.collect to resolve some issues on windows (see https://github.com/huggingface/datasets/pull/2482)
However, calling gc.collect too often causes significant slowdowns (the CI run time doubled).
So I just moved the gc.collect call to the exact place where it's actually needed: when post-processing a dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2483/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2483/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2483.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2483",
"merged_at": "2021-06-11T15:31:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2483.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2483"
} | true | [
"I continue thinking that the origin of the issue has to do with tqdm (and not with Arrow): this issue only arises for version 4.50.0 (and later) of tqdm, not for previous versions of tqdm.\r\n\r\nMy guess is that tqdm made a change from version 4.50.0 that does not properly release the iterable. ",
"FR"
] |
https://api.github.com/repos/huggingface/datasets/issues/1234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1234/comments | https://api.github.com/repos/huggingface/datasets/issues/1234/events | https://github.com/huggingface/datasets/pull/1234 | 758,229,304 | MDExOlB1bGxSZXF1ZXN0NTMzNDM0ODkz | 1,234 | Added ade_corpus_v2, with 3 configs for relation extraction and classification task | [] | closed | false | null | 3 | 2020-12-07T07:05:14Z | 2020-12-14T17:49:14Z | 2020-12-14T17:49:14Z | null | Adverse Drug Reaction Data: ADE-Corpus-V2 dataset added configs for different tasks with given data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1234/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1234/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1234.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1234",
"merged_at": "2020-12-14T17:49:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1234.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1234"
} | true | [
"@lhoestq I have added the tags they are in separate files for 3 different configs",
"@lhoestq thanks for the review I added your suggested changes.",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/4811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4811/comments | https://api.github.com/repos/huggingface/datasets/issues/4811/events | https://github.com/huggingface/datasets/issues/4811 | 1,333,043,421 | I_kwDODunzps5PdKDd | 4,811 | Bug in function validate_type for Python >= 3.9 | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-08-09T10:25:21Z | 2022-08-12T13:27:05Z | 2022-08-12T13:27:05Z | null | ## Describe the bug
The function `validate_type` assumes that the type `typing.Optional[str]` is automatically transformed to `typing.Union[str, NoneType]`.
```python
In [4]: typing.Optional[str]
Out[4]: typing.Union[str, NoneType]
```
However, this is not the case for Python 3.9:
```python
In [3]: typing.Optional[str]
Out[3]: typing.Optional[str]
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4811/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4811/timeline | null | completed | null | null | false | [] |
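A version-robust way around the discrepancy shown above is to compare through `typing.get_origin` / `typing.get_args` rather than the type's repr, since `Optional[str]` and `Union[str, NoneType]` are structurally identical on every version. A minimal sketch of the idea, not the actual `validate_type` code:

```python
import typing

def is_optional_str(tp) -> bool:
    # Optional[str] is Union[str, NoneType] structurally,
    # however a given Python version chooses to print it.
    return (
        typing.get_origin(tp) is typing.Union
        and set(typing.get_args(tp)) == {str, type(None)}
    )

print(is_optional_str(typing.Optional[str]))     # True
print(is_optional_str(typing.Union[str, None]))  # True
print(is_optional_str(str))                      # False
```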
https://api.github.com/repos/huggingface/datasets/issues/1708 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1708/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1708/comments | https://api.github.com/repos/huggingface/datasets/issues/1708/events | https://github.com/huggingface/datasets/issues/1708 | 781,631,455 | MDU6SXNzdWU3ODE2MzE0NTU= | 1,708 | <html dir="ltr" lang="en" class="focus-outline-visible"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> | [] | closed | false | null | 0 | 2021-01-07T21:45:24Z | 2021-01-08T09:00:01Z | 2021-01-08T09:00:01Z | null | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons to have this dataset*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1708/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1708/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6002 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6002/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6002/comments | https://api.github.com/repos/huggingface/datasets/issues/6002/events | https://github.com/huggingface/datasets/pull/6002 | 1,786,053,060 | PR_kwDODunzps5UhP-Z | 6,002 | Add KLUE-MRC metrics | [] | closed | false | null | 1 | 2023-07-03T12:11:10Z | 2023-07-09T11:57:20Z | 2023-07-09T11:57:20Z | null | ## Metrics for KLUE-MRC (Korean Language Understanding Evaluation — Machine Reading Comprehension)
Adding metrics for [KLUE-MRC](https://huggingface.co/datasets/klue).
KLUE-MRC is very similar to SQuAD 2.0 but has a slightly different format which is why I added metrics for KLUE-MRC.
Specifically, [LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) leverages the scoring script of SQuAD to evaluate SQuAD 2.0 and KorQuAD, but that script isn't suitable for KLUE-MRC because its format differs slightly, which is why I added a dedicated scoring script for KLUE-MRC.
- [x] All tests passed
- [x] Added a metric card (referred the metric card of SQuAD 2.0)
- [x] Compatibility test with [LM Eval Harness](https://github.com/EleutherAI/lm-evaluation-harness) passed
### References
- [KLUE: Korean Language Understanding Evaluation](https://datasets-benchmarks-proceedings.neurips.cc/paper_files/paper/2021/file/98dce83da57b0395e163467c9dae521b-Paper-round2.pdf)
- [KLUE on Hugging Face Datasets](https://huggingface.co/datasets/klue)
- #2416 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6002/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6002/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6002.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6002",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6002.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6002"
} | true | [
"The metrics API in `datasets` is deprecated as of version 2.0, and `evaulate` is our new library for metrics. You can add a new metric to it by following [these steps](https://huggingface.co/docs/evaluate/creating_and_sharing)."
] |
https://api.github.com/repos/huggingface/datasets/issues/4321 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4321/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4321/comments | https://api.github.com/repos/huggingface/datasets/issues/4321/events | https://github.com/huggingface/datasets/pull/4321 | 1,233,273,351 | PR_kwDODunzps43ryW7 | 4,321 | Adding dataset enwik8 | [] | closed | false | null | 2 | 2022-05-11T23:25:02Z | 2022-06-01T14:27:30Z | 2022-06-01T14:04:06Z | null | Because I regularly work with enwik8, I would like to contribute the dataset loader 🤗 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4321/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4321/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4321.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4321",
"merged_at": "2022-06-01T14:04:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4321.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4321"
} | true | [
"@lhoestq Thank you for the great feedback! Looks like all tests are passing now :)",
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2967/comments | https://api.github.com/repos/huggingface/datasets/issues/2967/events | https://github.com/huggingface/datasets/issues/2967 | 1,007,194,837 | I_kwDODunzps48CJLV | 2,967 | Adding vision-and-language datasets (e.g., VQA, VCR) to Datasets | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2021-09-25T20:58:15Z | 2021-10-03T20:34:22Z | 2021-10-03T20:34:22Z | null | **Is your feature request related to a problem? Please describe.**
Would you like to add any vision-and-language datasets (e.g., VQA, VCR) to Huggingface Datasets?
**Describe the solution you'd like**
N/A
**Describe alternatives you've considered**
N/A
**Additional context**
This is Da Yin at UCLA. Recently, we have published an EMNLP 2021 paper about geo-diverse visual commonsense reasoning (https://arxiv.org/abs/2109.06860). We propose a new dataset called GD-VCR, a vision-and-language dataset to evaluate how well V&L models perform on scenarios involving geo-location-specific commonsense. We hope to have our V&L dataset incorporated into Huggingface to further promote our project, but I haven't seen much V&L datasets in the current package. Is it possible to add V&L datasets, and if so, how should we prepare for the loading? Thank you very much!
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2967/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2967/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3363/comments | https://api.github.com/repos/huggingface/datasets/issues/3363/events | https://github.com/huggingface/datasets/pull/3363 | 1,068,824,340 | PR_kwDODunzps4vRVCl | 3,363 | Update URL of Jeopardy! dataset | [] | closed | false | null | 2 | 2021-12-01T20:08:10Z | 2022-10-06T13:45:49Z | 2021-12-03T12:35:01Z | null | Updates the URL of the Jeopardy! dataset.
Fix #3361 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3363/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3363/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3363.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3363",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3363.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3363"
} | true | [
"Closing this PR in favor of #3266.",
"I think you should also close this branch"
] |
https://api.github.com/repos/huggingface/datasets/issues/2202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2202/comments | https://api.github.com/repos/huggingface/datasets/issues/2202/events | https://github.com/huggingface/datasets/pull/2202 | 854,501,109 | MDExOlB1bGxSZXF1ZXN0NjEyNDM2ODMx | 2,202 | Add classes GenerateMode, DownloadConfig and Version to the documentation | [] | closed | false | null | 0 | 2021-04-09T12:58:19Z | 2021-04-12T17:58:00Z | 2021-04-12T17:57:59Z | null | Add documentation for classes `GenerateMode`, `DownloadConfig` and `Version`.
Update the docstring of `load_dataset` to create cross-reference links to the classes.
Related to #2187. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2202/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2202/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2202.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2202",
"merged_at": "2021-04-12T17:57:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2202.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2202"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3903/comments | https://api.github.com/repos/huggingface/datasets/issues/3903/events | https://github.com/huggingface/datasets/pull/3903 | 1,167,521,627 | PR_kwDODunzps40WnkI | 3,903 | Add Biwi Kinect Head Pose dataset. | [] | closed | false | null | 17 | 2022-03-13T08:59:21Z | 2022-05-31T17:02:19Z | 2022-05-31T12:15:58Z | null | This PR adds the Biwi Kinect Head Pose dataset.
Dataset Request : Add Biwi Kinect Head Pose Database [#3822](https://github.com/huggingface/datasets/issues/3822)
The Biwi Kinect Head Pose Database is acquired with the Microsoft Kinect sensor, a structured IR light device. It contains 15K images of 20 people (6 females and 14 males), where 4 people were recorded twice.
For each frame, there is :
- a depth image, (.bin file)
- a corresponding rgb image (both 640x480 pixels),
- annotation ( present inside a .txt file)
The ground truth is the 3D location of the head and its rotation.
The dataset structure is as follows :
```
- 01.obj
- 01
- frame_00003_depth.bin
- frame_00003_pose.txt
- frame_00003_rgb.png
.
.
.
- 02.obj
- 02
- frame_00003_depth.bin
- frame_00003_pose.txt
- frame_00003_rgb.png
.
.
.
```
Preview of frame_00003_pose.txt :
```
0.988397 0.0731349 0.133128
-0.0441539 0.976945 -0.208876
-0.145334 0.200575 0.968838
126.665 40.4515 876.198
```
I have used the following dataset features :
```
features=datasets.Features(
{
"person_id": datasets.Value("string"),
"frame_number": datasets.Value("string"),
"depth_image": datasets.Value("string"),
"rgb_image": datasets.Image(),
"3D_head_center": datasets.Array2D(shape=(3, 3), dtype="float"),
"3D_head_rotation": datasets.Value("float"),
}
```
I am giving the path to the depth_image here.
I need some inputs for the following :
1. For each person, the dataset has the following additional information :
```
For each sequence, the corresponding .obj file represents a head template deformed to match the neutral face of that specific person. [*.obj file]
In each folder, two .cal files contain calibration information for the depth and the color camera, e.g., the intrinsic camera matrix of the depth camera and the global rotation and translation to the rgb camera.
```
How can we represent these features?
2. For `_generate_examples`, do I parse the directories and fetch the required information? This would mean reading the .txt file to obtain the "3D_head_center" and "3D_head_rotation" details. Alternatively, we could precompute the feature information into a metadata file and use that file to yield examples in `_generate_examples`. What do you think is the best approach here? (A rough parsing sketch is shown below.)
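For illustration, here is a rough sketch of parsing one pose file, assuming the four-line layout shown in the preview above (three rotation rows, then the head center); the helper name is hypothetical:
```python
import numpy as np

def parse_pose_file(path):
    # Hypothetical helper, not part of any loader: parse a Biwi *_pose.txt file.
    # First three lines -> 3x3 head rotation matrix, last line -> 3D head center.
    with open(path) as f:
        rows = [list(map(float, line.split())) for line in f if line.strip()]
    rotation = np.array(rows[:3])  # shape (3, 3)
    center = np.array(rows[3])     # shape (3,): x, y, z
    return rotation, center
```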
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3903/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3903/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3903.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3903",
"merged_at": "2022-05-31T12:15:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3903.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3903"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the detailed explanation of the structure!\r\n\r\n1. IMO it makes the most sense to yield one example for each person (so the total of 24 examples), so the features dict should be similar to this:\r\n \r\n ```python\r\n ... |
https://api.github.com/repos/huggingface/datasets/issues/3945 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3945/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3945/comments | https://api.github.com/repos/huggingface/datasets/issues/3945/events | https://github.com/huggingface/datasets/pull/3945 | 1,171,222,257 | PR_kwDODunzps40ixmc | 3,945 | Fix comet metric | [] | closed | false | null | 4 | 2022-03-16T15:56:47Z | 2022-03-22T15:10:12Z | 2022-03-22T15:05:30Z | null | The COMET metric has been broken for a while since big breaking changes happened. We did not catch them in the CI because the slow test mocks the download_model function that was changed.
This PR fixes the metric, updates the download_model mock and updates the doctest. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3945/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3945/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3945.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3945",
"merged_at": "2022-03-22T15:05:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3945.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3945"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Finally I'm done updating the dependencies ^^'\r\n\r\ncc @sashavor can you review my changes in the metric card please ?",
"Looks good to me! Just fixed a tiny typo :wink: ",
"Thanks !"
] |
https://api.github.com/repos/huggingface/datasets/issues/1949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1949/comments | https://api.github.com/repos/huggingface/datasets/issues/1949/events | https://github.com/huggingface/datasets/issues/1949 | 816,986,936 | MDU6SXNzdWU4MTY5ODY5MzY= | 1,949 | Enable Fast Filtering using Arrow Dataset | [] | open | false | null | 2 | 2021-02-26T02:53:37Z | 2021-02-26T19:18:29Z | null | null | Hi @lhoestq,
As mentioned in Issue #1796, I would love to work on enabling fast filtering/mapping. Can you please share the expectations? It would be great if you could point me to the relevant methods/files involved, the docs, or maybe an overview of `arrow_dataset.py`. I only ask this because I am having trouble getting started ;-;
Any help would be appreciated.
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1949/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1949/timeline | null | null | null | null | false | [
"Hi @gchhablani :)\r\nThanks for proposing your help !\r\n\r\nI'll be doing a refactor of some parts related to filtering in the scope of https://github.com/huggingface/datasets/issues/1877\r\nSo I would first wait for this refactor to be done before working on the filtering. In particular because I plan to make th... |
https://api.github.com/repos/huggingface/datasets/issues/5393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5393/comments | https://api.github.com/repos/huggingface/datasets/issues/5393/events | https://github.com/huggingface/datasets/pull/5393 | 1,512,908,613 | PR_kwDODunzps5GTg0a | 5,393 | Finish deprecating the fs argument | [] | closed | false | null | 6 | 2022-12-28T15:33:17Z | 2023-01-18T12:42:33Z | 2023-01-18T12:35:32Z | null | See #5385 for some discussion on this
The `fs=` arg was deprecated from `Dataset.save_to_disk` and `Dataset.load_from_disk` in `2.8.0` (to be removed in `3.0.0`). There are a few other places where the `fs=` arg was still used (functions/methods in `datasets.info` and `datasets.load`). This PR adds the same deprecation behavior, warnings, and the `storage_options=` arg to these functions and methods.
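For reference, a hedged sketch of the deprecation-shim pattern being described (names and internals assumed, not the actual `datasets` code):
```python
import warnings

def load_from_disk(path, fs="deprecated", storage_options=None):
    # Sketch only: a sentinel default lets us detect an explicitly passed fs
    if fs != "deprecated":
        warnings.warn(
            "'fs' was deprecated in 2.8.0 and will be removed in 3.0.0. "
            "Please use 'storage_options' instead.",
            FutureWarning,
        )
        # fsspec filesystems expose the options they were built with
        storage_options = fs.storage_options
    ...
```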
One question: should the "deprecated" / "added" versions be `2.8.1` for the docs/warnings on these? Right now I'm going with "fs was deprecated in 2.8.0" but "storage_options= was added in 2.8.1" where appropriate.
@mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5393/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5393/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5393.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5393",
"merged_at": "2023-01-18T12:35:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5393.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5393"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for the deprecation. Some minor suggested fixes below...\r\n> \r\n> Also note that the corresponding tests should be updated as well.\r\n\r\nThanks for the suggestions/typo fixes. I updated the failing test - passing locall... |
https://api.github.com/repos/huggingface/datasets/issues/6090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6090/comments | https://api.github.com/repos/huggingface/datasets/issues/6090/events | https://github.com/huggingface/datasets/issues/6090 | 1,825,865,043 | I_kwDODunzps5s1H1T | 6,090 | FilesIterable skips all the files after a hidden file | [] | open | false | null | 0 | 2023-07-28T07:25:57Z | 2023-07-28T07:25:57Z | null | null | ### Describe the bug
When initializing `FilesIterable` with a list of file paths using `FilesIterable.from_paths`, it will discard all the files after a hidden file.
The problem is in [this line](https://github.com/huggingface/datasets/blob/88896a7b28610ace95e444b94f9a4bc332cc1ee3/src/datasets/download/download_manager.py#L233C26-L233C26) where `return` should be replaced by `continue`.
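For illustration, a minimal sketch of the iteration pattern (simplified, not the actual library code) showing why `return` drops the remaining files while `continue` skips only the hidden one:
```python
import os

def iter_visible_files(paths):
    # Simplified sketch of the skipping logic
    for path in paths:
        if os.path.basename(path).startswith((".", "__")):
            # Bug: `return` here would end the whole iteration,
            # silently discarding every file after the hidden one.
            continue  # the fix: skip only this file
        yield path

print(list(iter_visible_files(["a.txt", ".hidden", "b.txt"])))  # ['a.txt', 'b.txt']
```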
### Steps to reproduce the bug
https://colab.research.google.com/drive/1SQlxs4y_LSo1Q89KnFoYDSyyKEISun_J#scrollTo=93K4_blkW-8-
### Expected behavior
The script should print all the files except the hidden one.
### Environment info
- `datasets` version: 2.14.1
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.16.4
- PyArrow version: 9.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6090/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6090/timeline | null | null | null | null | false | [
"Thanks for reporting. We've merged a PR with a fix."
] |
https://api.github.com/repos/huggingface/datasets/issues/6010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6010/comments | https://api.github.com/repos/huggingface/datasets/issues/6010/events | https://github.com/huggingface/datasets/issues/6010 | 1,793,838,152 | I_kwDODunzps5q68xI | 6,010 | Improve `Dataset`'s string representation | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 2 | 2023-07-07T16:38:03Z | 2023-07-16T13:00:18Z | null | null | Currently, `Dataset.__repr__` outputs a dataset's column names and the number of rows. We could improve it by printing its features and the first few rows.
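For instance, a minimal sketch of what a richer plain-text repr could look like (an assumed helper, not the final design):
```python
from datasets import Dataset

def rich_repr(ds) -> str:
    # Hypothetical helper: features plus a small row preview
    head = "\n".join(str(ds[i]) for i in range(min(3, len(ds))))
    return f"Dataset(features={list(ds.features)}, num_rows={ds.num_rows})\n{head}"

ds = Dataset.from_dict({"x": [1, 2, 3, 4]})
print(rich_repr(ds))
```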
We should also implement `_repr_html_` to have a rich HTML representation in notebooks/Streamlit. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6010/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6010/timeline | null | null | null | null | false | [
"I want to take a shot at this if possible ",
"Yes, feel free to work on this.\r\n\r\nYou can check the PyArrow Table `__repr__` and Polars DataFrame `__repr__`/`_repr_html_` implementations for some pointers/ideas."
] |
https://api.github.com/repos/huggingface/datasets/issues/1049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1049/comments | https://api.github.com/repos/huggingface/datasets/issues/1049/events | https://github.com/huggingface/datasets/pull/1049 | 756,157,602 | MDExOlB1bGxSZXF1ZXN0NTMxNzQ3NDY0 | 1,049 | Add siswati ner corpus | [] | closed | false | null | 0 | 2020-12-03T12:36:00Z | 2020-12-03T17:27:02Z | 2020-12-03T17:26:55Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1049/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1049/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1049.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1049",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1049.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1049"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/4325 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4325/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4325/comments | https://api.github.com/repos/huggingface/datasets/issues/4325/events | https://github.com/huggingface/datasets/issues/4325 | 1,233,812,191 | I_kwDODunzps5Jinrf | 4,325 | Dataset Viewer issue for strombergnlp/offenseval_2020, strombergnlp/polstance | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 4 | 2022-05-12T10:59:08Z | 2022-05-13T10:57:15Z | 2022-05-13T10:57:02Z | null | ### Link
https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train
### Description
The viewer isn't running for these two datasets. I left it overnight because a wait sometimes helps things get loaded, and the error messages have all gone, but the datasets are still turning up blank in the viewer. Maybe it needs a bit more time.
* https://huggingface.co/datasets/strombergnlp/polstance/viewer/PolStance/train
* https://huggingface.co/datasets/strombergnlp/offenseval_2020/viewer/ar/train
While offenseval_2020 is gated with a prompt, the other gated previews I have run fine in the Viewer, e.g. https://huggingface.co/datasets/strombergnlp/shaj , so I'm a bit stumped!
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4325/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4325/timeline | null | completed | null | null | false | [
"Not sure if it's related... I was going to raise an issue for https://huggingface.co/datasets/domenicrosati/TruthfulQA which also has the same issue... https://huggingface.co/datasets/domenicrosati/TruthfulQA/viewer/domenicrosati--TruthfulQA/train \r\n\r\n",
"Yes, it's related. The backend behind the dataset vie... |
https://api.github.com/repos/huggingface/datasets/issues/4864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4864/comments | https://api.github.com/repos/huggingface/datasets/issues/4864/events | https://github.com/huggingface/datasets/issues/4864 | 1,344,410,043 | I_kwDODunzps5QIhG7 | 4,864 | Allow pathlib PosixPath in Dataset.read_json | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 6 | 2022-08-19T12:59:17Z | 2023-03-12T11:25:49Z | null | null | **Is your feature request related to a problem? Please describe.**
```
from pathlib import Path
from datasets import Dataset
ds = Dataset.read_json(Path('data.json'))
```
causes an error
```
AttributeError: 'PosixPath' object has no attribute 'decode'
```
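In the meantime, a possible workaround sketch: stringify the path before passing it in (shown here with `Dataset.from_json`, which accepts a string path):
```python
from pathlib import Path
from datasets import Dataset

# Convert the Path to str so no .decode() is attempted on a PosixPath
ds = Dataset.from_json(str(Path("data.json")))
```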
**Describe the solution you'd like**
It should be able to accept PosixPath and read the json from inside. | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4864/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4864/timeline | null | null | null | null | false | [
"This same error will occur using `ds = datasets.load_dataset('json', data_files=['test.jsonl'])`",
"@cccntu I want to make a quick fix for this, but I am struggling to find where the json dataset builder is. Do you know?",
"@vvvm23 I think you mean think:\r\n```python\r\nds = datasets.load_dataset('json', data... |
https://api.github.com/repos/huggingface/datasets/issues/4987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4987/comments | https://api.github.com/repos/huggingface/datasets/issues/4987/events | https://github.com/huggingface/datasets/pull/4987 | 1,376,006,477 | PR_kwDODunzps4_GlIu | 4,987 | Embed image/audio data in dl_and_prepare parquet | [] | closed | false | null | 1 | 2022-09-16T14:09:27Z | 2022-09-16T16:24:47Z | 2022-09-16T16:22:35Z | null | Embed the bytes of the image or audio files in the Parquet files directly, instead of having a "path" that points to a local file.
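For illustration, a hedged sketch of the embedding idea (the column name "audio" and the exact transform are assumed, not the actual patch; `datasets` encodes media as `{"bytes", "path"}` dicts):
```python
def embed_file_bytes(example):
    # Read the local file and store its raw bytes in place of the path
    path = example["audio"]["path"]
    if path is not None:
        with open(path, "rb") as f:
            example["audio"] = {"bytes": f.read(), "path": None}
    return example
```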
Indeed Parquet files are often used to share data or to be used by workers that may not have access to the local files. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4987/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4987/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4987.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4987",
"merged_at": "2022-09-16T16:22:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4987.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4987"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3921/comments | https://api.github.com/repos/huggingface/datasets/issues/3921/events | https://github.com/huggingface/datasets/pull/3921 | 1,169,749,338 | PR_kwDODunzps40d4Mk | 3,921 | Fix NonMatchingChecksumError in CRD3 dataset | [] | closed | false | null | 2 | 2022-03-15T14:27:14Z | 2022-03-15T15:54:27Z | 2022-03-15T15:54:26Z | null | Fix #3051 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3921/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3921/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3921.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3921",
"merged_at": "2022-03-15T15:54:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3921.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3921"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3921). All of your documentation changes will be reflected on that endpoint.",
"Unrelated test failure. This PR can be merged."
] |
https://api.github.com/repos/huggingface/datasets/issues/2409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2409/comments | https://api.github.com/repos/huggingface/datasets/issues/2409/events | https://github.com/huggingface/datasets/pull/2409 | 903,441,398 | MDExOlB1bGxSZXF1ZXN0NjU0Njk3NjA0 | 2,409 | Add HF_ prefix to env var MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | [] | closed | false | null | 14 | 2021-05-27T09:07:00Z | 2021-06-08T16:00:55Z | 2021-05-27T09:33:41Z | null | As mentioned in https://github.com/huggingface/datasets/pull/2399 the env var should be prefixed by HF_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2409/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2409/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2409.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2409",
"merged_at": "2021-05-27T09:33:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2409.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2409"
} | true | [
"I thought the renaming was suggested only for the env var, and not for the config variable... As you think is better! ;)",
"I think it's better if they match, so that users understand directly that they're directly connected",
"Well, if you're not concerned about back-compat here, perhaps it could be renamed a... |
https://api.github.com/repos/huggingface/datasets/issues/194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/194/comments | https://api.github.com/repos/huggingface/datasets/issues/194/events | https://github.com/huggingface/datasets/pull/194 | 624,854,897 | MDExOlB1bGxSZXF1ZXN0NDIzMTgyNDM5 | 194 | Add Dataset: Qanta | [] | closed | false | null | 3 | 2020-05-26T12:44:35Z | 2020-05-26T16:58:17Z | 2020-05-26T13:16:20Z | null | Fixes dummy data for #169 @EntilZha | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/194/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/194.diff",
"html_url": "https://github.com/huggingface/datasets/pull/194",
"merged_at": "2020-05-26T13:16:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/194.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/194"
} | true | [
"@lhoestq - the config name is rather special here: *E.g.* `mode=first,char_skip=25`. It includes `=` and `,` - will that be a problem for windows folders, you think? \r\n\r\nApart from that good to merge for me.",
"It's ok to have `=` and `,`.\r\nWindows doesn't like things like `?`, `:`, `/` etc.\r\n\r\nI'll ad... |
https://api.github.com/repos/huggingface/datasets/issues/1230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1230/comments | https://api.github.com/repos/huggingface/datasets/issues/1230/events | https://github.com/huggingface/datasets/pull/1230 | 758,119,342 | MDExOlB1bGxSZXF1ZXN0NTMzMzQxNTg0 | 1,230 | Add Urdu fake news dataset | [] | closed | false | null | 1 | 2020-12-07T03:19:50Z | 2020-12-07T18:04:55Z | 2020-12-07T16:57:54Z | null | @lhoestq opened a clean PR containing only relevant files.
old PR #1125 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1230/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1230/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1230.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1230",
"merged_at": "2020-12-07T16:57:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1230.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1230"
} | true | [
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/3806 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3806/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3806/comments | https://api.github.com/repos/huggingface/datasets/issues/3806/events | https://github.com/huggingface/datasets/pull/3806 | 1,157,505,826 | PR_kwDODunzps4z2FeI | 3,806 | Fix Spanish data file URL in wiki_lingua dataset | [] | closed | false | null | 0 | 2022-03-02T17:43:42Z | 2022-03-03T08:38:17Z | 2022-03-03T08:38:16Z | null | This PR fixes the URL for Spanish data file.
Previously, Spanish had the same URL as Vietnamese data file. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3806/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3806/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3806.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3806",
"merged_at": "2022-03-03T08:38:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3806.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3806"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6084/comments | https://api.github.com/repos/huggingface/datasets/issues/6084/events | https://github.com/huggingface/datasets/issues/6084 | 1,824,896,761 | I_kwDODunzps5sxbb5 | 6,084 | Changing pixel values of images in the Winoground dataset | [] | open | false | null | 0 | 2023-07-27T17:55:35Z | 2023-07-27T17:55:35Z | null | null | Hi, I followed the instructions with the latest "datasets" version:
"
from datasets import load_dataset
examples = load_dataset('facebook/winoground', use_auth_token=<YOUR USER ACCESS TOKEN>)
"
I got slightly different datasets in Colab and in my HPC environment. Specifically, the pixel values of the images are slightly different.
I thought it was due to a package version difference, but this morning I found out that my Winoground dataset in Colab became the same as the one in my HPC environment. The dataset in Colab used to produce the correct result, but now that is gone as well.
Can you help me with this? What causes the datasets to have the wrong pixel values? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6084/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6084/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/156/comments | https://api.github.com/repos/huggingface/datasets/issues/156/events | https://github.com/huggingface/datasets/issues/156 | 620,263,687 | MDU6SXNzdWU2MjAyNjM2ODc= | 156 | SyntaxError with WMT datasets | [] | closed | false | null | 7 | 2020-05-18T14:38:18Z | 2020-07-23T16:41:55Z | 2020-07-23T16:41:55Z | null | The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-8-3206959998b9>", line 3, in <module>
dataset = nlp.load_dataset('wmt14')
File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 505, in load_dataset
builder_cls = import_main_class(module_path, dataset=True)
File "/home/tom/.local/lib/python3.6/site-packages/nlp/load.py", line 56, in import_main_class
module = importlib.import_module(module_path)
File "/usr/lib/python3.6/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 994, in _gcd_import
File "<frozen importlib._bootstrap>", line 971, in _find_and_load
File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 665, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 678, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt14.py", line 21, in <module>
from .wmt_utils import Wmt, WmtConfig
File "/home/tom/.local/lib/python3.6/site-packages/nlp/datasets/wmt14/c258d646f4f5870b0245f783b7aa0af85c7117e06aacf1e0340bd81935094de2/wmt_utils.py", line 659
<<<<<<< HEAD
^
SyntaxError: invalid syntax
```
Python version:
`3.6.9 (default, Apr 18 2020, 01:56:04) [GCC 8.4.0]`
Running on Ubuntu 18.04, via a Jupyter notebook | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/156/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/156/timeline | null | completed | null | null | false | [
"Jeez - don't know what happened there :D Should be fixed now! \r\n\r\nThanks a lot for reporting this @tomhosking !",
"Hi @patrickvonplaten!\r\n\r\nI'm now getting the below error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError ... |
https://api.github.com/repos/huggingface/datasets/issues/568 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/568/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/568/comments | https://api.github.com/repos/huggingface/datasets/issues/568/events | https://github.com/huggingface/datasets/issues/568 | 691,638,656 | MDU6SXNzdWU2OTE2Mzg2NTY= | 568 | `metric.compute` throws `ArrowInvalid` error | [] | closed | false | null | 3 | 2020-09-03T04:56:57Z | 2020-10-05T16:33:53Z | 2020-10-05T16:33:53Z | null | I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly, so I can't easily reproduce it. This is using `nlp==0.4.0`
```
File "/home/beltagy/trainer.py", line 92, in validation_step
rouge_scores = rouge.compute(predictions=generated_str, references=gold_str, rouge_types=['rouge2', 'rouge1', 'rougeL'])
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/metric.py", line 224, in compute
self.finalize(timeout=timeout)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/metric.py", line 213, in finalize
self.data = Dataset(**reader.read_files(node_files))
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 217, in read_files
dataset_kwargs = self._read_files(files=files, info=self._info, original_instructions=original_instructions)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 162, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/nlp/arrow_reader.py", line 276, in _get_dataset_from_filename
f = pa.ipc.open_stream(mmap)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/pyarrow/ipc.py", line 173, in open_stream
return RecordBatchStreamReader(source)
File "/home/beltagy/miniconda3/envs/allennlp/lib/python3.7/site-packages/pyarrow/ipc.py", line 64, in __init__
self._open(source)
File "pyarrow/ipc.pxi", line 469, in pyarrow.lib._RecordBatchStreamReader._open
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Tried reading schema message, was null or length 0
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/568/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/568/timeline | null | completed | null | null | false | [
"Hmm might be related to what we are solving in #564",
"Could you try to update to `datasets>=1.0.0` (we changed the name of the library) and try again ?\r\nIf is was related to the distributed setup settings it must be fixed.\r\nIf it was related to empty metric inputs it's going to be fixed in #654 ",
"Closin... |
https://api.github.com/repos/huggingface/datasets/issues/3311 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3311/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3311/comments | https://api.github.com/repos/huggingface/datasets/issues/3311/events | https://github.com/huggingface/datasets/issues/3311 | 1,060,387,957 | I_kwDODunzps4_NDx1 | 3,311 | Add WebSRC | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 0 | 2021-11-22T16:58:33Z | 2021-11-22T16:58:33Z | null | null | ## Adding a Dataset
- **Name:** WebSRC
- **Description:** WebSRC is a novel Web-based Structural Reading Comprehension dataset. It consists of 0.44M question-answer pairs, which are collected from 6.5K web pages with corresponding HTML source code, screenshots and metadata.
- **Paper:** https://arxiv.org/abs/2101.09465
- **Data:** https://x-lance.github.io/WebSRC/dashboard.html#
- **Motivation:** Currently adding MarkupLM to HuggingFace Transformers, which achieves SOTA on this dataset.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3311/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3311/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5602 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5602/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5602/comments | https://api.github.com/repos/huggingface/datasets/issues/5602/events | https://github.com/huggingface/datasets/pull/5602 | 1,607,054,110 | PR_kwDODunzps5LJGfa | 5,602 | Return dict structure if columns are lists - to_tf_dataset | [] | open | false | null | 19 | 2023-03-02T15:51:12Z | 2023-04-12T15:54:53Z | null | null | This PR introduces new logic to `to_tf_dataset` affecting the returned data structure, enabling a dictionary structure to be returned, even if only one feature column is selected.
If `columns` or `label_cols` passed to `to_tf_dataset` is a list, the corresponding output is returned as a dictionary; if it is a string, the tensor is returned directly.
An outline of the behaviour:
```
dataset,to_tf_dataset(columns=["col_1"], label_cols="col_2")
# ({'col_1': col_1}, col_2}
dataset,to_tf_dataset(columns="col1", label_cols="col_2")
# (col1, col2)
dataset,to_tf_dataset(columns="col1")
# col1
dataset,to_tf_dataset(columns=["col_1"], labels=["col_2"])
# ({'col1': tensor}, {'col2': tensor}}
dataset,to_tf_dataset(columns="col_1", labels=["col_2"])
# (col1, {'col2': tensor}}
```
## Motivation
Currently, when calling `to_tf_dataset`, the returned dataset will always return a tuple structure if a single feature column is used. This can cause issues when calling `model.fit` on models which train without labels, e.g. [TFVitMAEForPreTraining](https://github.com/huggingface/transformers/blob/b6f47b539377ac1fd845c7adb4ccaa5eb514e126/src/transformers/models/vit_mae/modeling_vit_mae.py#L849). Specifically, [this line](https://github.com/huggingface/transformers/blob/d9e28d91a8b2d09b51a33155d3a03ad9fcfcbd1f/src/transformers/modeling_tf_utils.py#L1521) assumes the input `x` is a dictionary if there is no label.
## Example
Previous behaviour
```python
In [1]: import tensorflow as tf
...: from datasets import load_dataset
...:
...:
...: def transform(batch):
...: def _transform_img(img):
...: img = img.convert("RGB")
...: img = tf.keras.utils.img_to_array(img)
...: img = tf.image.resize(img, (224, 224))
...: img /= 255.0
...: img = tf.transpose(img, perm=[2, 0, 1])
...: return img
...: batch['pixel_values'] = [_transform_img(pil_img) for pil_img in batch['img']]
...: return batch
...:
...:
...: def collate_fn(examples):
...: pixel_values = tf.stack([example["pixel_values"] for example in examples])
...: return {"pixel_values": pixel_values}
...:
...:
...: dataset = load_dataset('cifar10')['train']
...: dataset = dataset.with_transform(transform)
...: dataset.to_tf_dataset(batch_size=8, columns=['pixel_values'], collate_fn=collate_fn)
Out[1]: <PrefetchDataset element_spec=TensorSpec(shape=(None, 3, 224, 224), dtype=tf.float32, name=None)>
```
New behaviour
```python
In [1]: import tensorflow as tf
...: from datasets import load_dataset
...:
...:
...: def transform(batch):
...: def _transform_img(img):
...: img = img.convert("RGB")
...: img = tf.keras.utils.img_to_array(img)
...: img = tf.image.resize(img, (224, 224))
...: img /= 255.0
...: img = tf.transpose(img, perm=[2, 0, 1])
...: return img
...: batch['pixel_values'] = [_transform_img(pil_img) for pil_img in batch['img']]
...: return batch
...:
...:
...: def collate_fn(examples):
...: pixel_values = tf.stack([example["pixel_values"] for example in examples])
...: return {"pixel_values": pixel_values}
...:
...:
...: dataset = load_dataset('cifar10')['train']
...: dataset = dataset.with_transform(transform)
...: dataset.to_tf_dataset(batch_size=8, columns=['pixel_values'], collate_fn=collate_fn)
Out[1]: <PrefetchDataset element_spec={'pixel_values': TensorSpec(shape=(None, 3, 224, 224), dtype=tf.float32, name=None)}>
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5602/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5602/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5602.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5602",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5602.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5602"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5602). All of your documentation changes will be reflected on that endpoint.",
"This is a great PR! Thinking about the UX though, maybe we could do it without the extra argument? Before this PR, the logic in `to_tf_dataset` was... |
https://api.github.com/repos/huggingface/datasets/issues/4098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4098/comments | https://api.github.com/repos/huggingface/datasets/issues/4098/events | https://github.com/huggingface/datasets/pull/4098 | 1,193,245,522 | PR_kwDODunzps41qXjo | 4,098 | Proposing WikiSplit metric card | [] | closed | false | null | 3 | 2022-04-05T14:36:34Z | 2022-10-11T09:10:21Z | 2022-04-05T15:42:28Z | null | Pinging @lhoestq to ensure that my distinction between the dataset and the metric are clear :sweat_smile: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4098/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4098/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4098.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4098",
"merged_at": "2022-04-05T15:42:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4098.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4098"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"A quick Github tip ;) To avoid running N times the CI, you can push all the changes at once: go to Files Changed tab, and on each suggestion there's a \"add to commit batch\" and then you can do one commit for all the suggestions you... |
https://api.github.com/repos/huggingface/datasets/issues/3743 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3743/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3743/comments | https://api.github.com/repos/huggingface/datasets/issues/3743/events | https://github.com/huggingface/datasets/pull/3743 | 1,141,176,011 | PR_kwDODunzps4y-2Do | 3,743 | initial monash time series forecasting repository | [] | closed | false | null | 3 | 2022-02-17T10:51:31Z | 2022-03-21T09:54:41Z | 2022-03-21T09:50:16Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3743/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3743/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3743.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3743",
"merged_at": "2022-03-21T09:50:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3743.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3743"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI fails are unrelated to this PR, merging !",
"thanks 🙇🏽 "
] |
https://api.github.com/repos/huggingface/datasets/issues/5781 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5781/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5781/comments | https://api.github.com/repos/huggingface/datasets/issues/5781/events | https://github.com/huggingface/datasets/issues/5781 | 1,679,580,460 | I_kwDODunzps5kHF0s | 5,781 | Error using `load_datasets` | [] | closed | false | null | 2 | 2023-04-22T15:10:44Z | 2023-05-02T23:41:25Z | 2023-05-02T23:41:25Z | null | ### Describe the bug
I tried to load a dataset using the `datasets` library in a conda Jupyter notebook and got the error below.
```
ImportError: dlopen(/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so, 0x0002): Library not loaded: @rpath/liblapack.3.dylib
Referenced from: <65B094A2-59D7-31AC-A966-4DB9E11D2A15> /Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/_iterative.cpython-38-darwin.so
Reason: tried: '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/lib/python3.8/site-packages/scipy/sparse/linalg/_isolve/../../../../../../liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/Users/gilbertyoung/miniforge3/envs/review_sense/bin/../lib/liblapack.3.dylib' (no such file), '/usr/local/lib/liblapack.3.dylib' (no such file), '/usr/lib/liblapack.3.dylib' (no such file, not in dyld cache)
```
### Steps to reproduce the bug
Run the `load_datasets` function
### Expected behavior
I expected the dataset to be loaded into my notebook.
### Environment info
name: review_sense
channels:
- apple
- conda-forge
dependencies:
- python=3.8
- pip>=19.0
- jupyter
- tensorflow-deps
#- scikit-learn
#- scipy
- pandas
- pandas-datareader
- matplotlib
- pillow
- tqdm
- requests
- h5py
- pyyaml
- flask
- boto3
- ipykernel
- seaborn
- pip:
- tensorflow-macos==2.9
- tensorflow-metal==0.5.0
- bayesian-optimization
- gym
- kaggle
- huggingface_hub
- datasets
- numpy
- huggingface
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5781/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5781/timeline | null | completed | null | null | false | [
"It looks like an issue with your installation of scipy, can you try reinstalling it ?",
"Sorry for the late reply, but that worked @lhoestq . Thanks for the assist."
] |
https://api.github.com/repos/huggingface/datasets/issues/2454 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2454/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2454/comments | https://api.github.com/repos/huggingface/datasets/issues/2454/events | https://github.com/huggingface/datasets/pull/2454 | 913,883,631 | MDExOlB1bGxSZXF1ZXN0NjYzODUyODU1 | 2,454 | Rename config and environment variable for in memory max size | [] | closed | false | null | 1 | 2021-06-07T19:21:08Z | 2021-06-07T20:43:46Z | 2021-06-07T20:43:46Z | null | As discussed in #2409, both config and environment variable have been renamed.
cc: @stas00, huggingface/transformers#12056 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2454/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2454/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2454.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2454",
"merged_at": "2021-06-07T20:43:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2454.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2454"
} | true | [
"Thank you for the rename, @albertvillanova!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3181/comments | https://api.github.com/repos/huggingface/datasets/issues/3181/events | https://github.com/huggingface/datasets/issues/3181 | 1,039,682,097 | I_kwDODunzps49-Eox | 3,181 | `None` converted to `"None"` when loading a dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 9 | 2021-10-29T15:23:53Z | 2021-12-11T01:16:40Z | 2021-12-09T14:26:57Z | null | ## Describe the bug
When loading a dataset, `None` values of type `NoneType` are converted to `'None'` of type `str`.
## Steps to reproduce the bug
```python
from datasets import load_dataset
qasper = load_dataset("qasper", split="train", download_mode="reuse_cache_if_exists")
print(qasper[60]["full_text"]["section_name"])
```
When installing version 1.1.40, the output is
`[None, 'Introduction', 'Benchmark Datasets', ...]`
When installing from the master branch, the output is
`['None', 'Introduction', 'Benchmark Datasets', ...]`
Notice how the first element was changed from `NoneType` to `str`.
## Expected results
`None` should stay as is.
## Actual results
`None` is converted to a string.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: master
- Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3181/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3181/timeline | null | completed | null | null | false | [
"Hi @eladsegal, thanks for reporting.\r\n\r\n@mariosasko I saw you are already working on this, but maybe my comment will be useful to you.\r\n\r\nAll values are casted to their corresponding feature type (including `None` values). For example if the feature type is `Value(\"bool\")`, `None` is casted to `False`.\r... |
https://api.github.com/repos/huggingface/datasets/issues/5916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5916/comments | https://api.github.com/repos/huggingface/datasets/issues/5916/events | https://github.com/huggingface/datasets/pull/5916 | 1,732,456,392 | PR_kwDODunzps5RskTb | 5,916 | Unpin responses | [] | closed | false | null | 4 | 2023-05-30T14:59:48Z | 2023-05-30T18:03:10Z | 2023-05-30T17:53:29Z | null | Fix #5906 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5916/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5916/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5916.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5916",
"merged_at": "2023-05-30T17:53:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5916.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5916"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/5475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5475/comments | https://api.github.com/repos/huggingface/datasets/issues/5475/events | https://github.com/huggingface/datasets/issues/5475 | 1,559,030,149 | I_kwDODunzps5c7OmF | 5,475 | Dataset scan time is much slower than using native arrow | [] | closed | false | null | 3 | 2023-01-27T01:32:25Z | 2023-01-30T16:17:11Z | 2023-01-30T16:17:11Z | null | ### Describe the bug
I'm basically running the same scanning experiment from the tutorials https://huggingface.co/course/chapter5/4?fw=pt except now I'm comparing to a native pyarrow version.
I'm finding that the native pyarrow approach is much faster (2 orders of magnitude). Is there something I'm missing that explains this phenomenon?
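For context, a condensed sketch of the two scan styles being compared (mirroring the idea in the linked notebook; the `.data.table` attribute name is assumed):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(100_000))})
table = ds.data.table  # underlying pyarrow.Table

# datasets-style scan: each batch is converted to Python objects
for i in range(0, len(ds), 1000):
    _ = ds[i : i + 1000]

# raw-Arrow scan: only slices buffers, no Python conversion
for i in range(0, table.num_rows, 1000):
    _ = {c: table[c][i : i + 1000] for c in table.column_names}
```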
### Steps to reproduce the bug
https://colab.research.google.com/drive/11EtHDaGAf1DKCpvYnAPJUW-LFfAcDzHY?usp=sharing
### Expected behavior
I expect scan times to be on par with using pyarrow directly.
### Environment info
standard colab environment | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5475/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5475/timeline | null | completed | null | null | false | [
"Hi ! In your code you only iterate on the Arrow buffers - you don't actually load the data as python objects. For a fair comparison, you can modify your code using:\r\n```diff\r\n- for _ in range(0, len(table), bsz):\r\n- _ = {k:table[k][_ : _ + bsz] for k in cols}\r\n+ for _ in range(0, len(table)... |
https://api.github.com/repos/huggingface/datasets/issues/5234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5234/comments | https://api.github.com/repos/huggingface/datasets/issues/5234/events | https://github.com/huggingface/datasets/pull/5234 | 1,447,999,062 | PR_kwDODunzps5C1diq | 5,234 | fix: dataset path should be absolute | [] | closed | false | null | 3 | 2022-11-14T12:47:40Z | 2022-12-07T23:49:22Z | 2022-12-07T23:46:34Z | null | cache_file_name depends on dataset's path.
A simple example of how this can cause a problem:
```
import os
import datasets
def add_prefix(example):
example["text"] = "Review: " + example["text"]
return example
ds = datasets.load_from_disk("a/relative/path")
os.chdir("/tmp")
ds_1 = ds.map(add_prefix)
```
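A minimal sketch of the idea behind the fix, resolving the path up front (an assumed helper, not the actual patch):
```python
import os

def normalize_dataset_path(dataset_path: str) -> str:
    # Resolve relative paths once, so cache file names derived from the
    # dataset path no longer depend on the current working directory.
    return os.path.abspath(dataset_path)
```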
While it may feel that the `chdir` is quite contrived, there are many scenarios in which the current working directory can/will change... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5234/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5234/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5234.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5234",
"merged_at": "2022-12-07T23:46:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5234.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5234"
} | true | [
"Good catch thanks ! Have you tried to use the absolue path in `MemoryMappedTable.__init__` in `table.py`?\r\n\r\nI think it can fix issues with relative paths at more levels than just fixing it `load_from_disk`. If it works I think it would be a more robust fix to this issue",
"@lhoestq right, that actually fixe... |
https://api.github.com/repos/huggingface/datasets/issues/5772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5772/comments | https://api.github.com/repos/huggingface/datasets/issues/5772/events | https://github.com/huggingface/datasets/pull/5772 | 1,675,033,510 | PR_kwDODunzps5OreXV | 5,772 | Fix JSON builder when missing keys in first row | [] | closed | false | null | 2 | 2023-04-19T14:32:57Z | 2023-04-21T06:45:13Z | 2023-04-21T06:35:27Z | null | Until now, the JSON builder only considered the keys present in the first element of the list:
- Either explicitly: by passing index 0 in `dataset[0].keys()`
- Or implicitly: `pa.Table.from_pylist(dataset)`, where "schema (default None): If not passed, will be inferred from the first row of the mapping values"
This PR fixes the bug by considering the union of the keys present in all the rows.
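For illustration, a tiny sketch of the key-union idea (simplified; the real patch operates on the parsed JSON batch):
```python
def union_of_keys(rows):
    # dict keys preserve first-seen order, so column order stays stable
    keys = {}
    for row in rows:
        keys.update(dict.fromkeys(row))
    return list(keys)

assert union_of_keys([{"a": 1}, {"a": 2, "b": 3}]) == ["a", "b"]
```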
Fix #5726. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5772/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5772/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5772.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5772",
"merged_at": "2023-04-21T06:35:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5772.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5772"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/4928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4928/comments | https://api.github.com/repos/huggingface/datasets/issues/4928/events | https://github.com/huggingface/datasets/pull/4928 | 1,360,941,172 | PR_kwDODunzps4-Ubi4 | 4,928 | Add ability to read-write to SQL databases. | [] | closed | false | null | 14 | 2022-09-03T19:09:08Z | 2022-10-03T16:34:36Z | 2022-10-03T16:32:28Z | null | Fixes #3094
Add ability to read/write to SQLite files and also read from any SQL database supported by SQLAlchemy.
I didn't add SQLAlchemy as a dependency, as it is fairly big, so it remains optional.
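For context, a hedged usage sketch of the feature (exact signatures may differ from the merged API):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "bar"]})
# Write the dataset to a SQLite table, then read it back
ds.to_sql("data", "sqlite:///data.db")
reloaded = Dataset.from_sql("data", "sqlite:///data.db")
```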
I also recorded a Loom to showcase the feature.
https://www.loom.com/share/f0e602c2de8a46f58bca4b43333d541f | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 4,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 8,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4928/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4928/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4928.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4928",
"merged_at": "2022-10-03T16:32:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4928.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4928"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Ah CI runs with `pandas=1.3.5` which doesn't return the number of row inserted.",
"wow this is super cool!",
"@lhoestq I'm getting error in integration tests, not sure if it's related to my PR. Any help would be appreciated :) \r... |
https://api.github.com/repos/huggingface/datasets/issues/5098 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5098/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5098/comments | https://api.github.com/repos/huggingface/datasets/issues/5098/events | https://github.com/huggingface/datasets/issues/5098 | 1,404,058,518 | I_kwDODunzps5TsDuW | 5,098 | Classes label error when loading symbolic links using imagefolder | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true... | closed | false | null | 3 | 2022-10-11T06:10:58Z | 2022-11-14T14:40:20Z | 2022-11-14T14:40:20Z | null | **Is your feature request related to a problem? Please describe.**
Like this: #4015
When there are **symbolic links** to pictures in the data folder, the parent folder name of the **real file** will be used as the class name instead of the parent folder of the symbolic link itself. Could you add an option to decide whether symbolic links should be followed?
This is inconsistent with the `torchvision.datasets.ImageFolder` behavior.
For example:


It uses `others` (in the green circle) as the class label instead of `abnormal`; I wish `load_dataset` would not use the real file's parent folder as the label.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context about the feature request here.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5098/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5098/timeline | null | completed | null | null | false | [
"It can be solved temporarily by remove `resolve` in \r\nhttps://github.com/huggingface/datasets/blob/bef23be3d9543b1ca2da87ab2f05070201044ddc/src/datasets/data_files.py#L278",
"Hi, thanks for reporting and suggesting a fix! We still need to account for `.`/`..` in the file path, so a more robust fix would be `P... |
https://api.github.com/repos/huggingface/datasets/issues/1807 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1807/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1807/comments | https://api.github.com/repos/huggingface/datasets/issues/1807/events | https://github.com/huggingface/datasets/pull/1807 | 798,823,591 | MDExOlB1bGxSZXF1ZXN0NTY1NTczNzU5 | 1,807 | Adding an aggregated dataset for the GEM benchmark | [] | closed | false | null | 1 | 2021-02-02T00:39:53Z | 2021-02-02T22:48:41Z | 2021-02-02T18:06:58Z | null | This dataset gathers modified versions of several other conditional text generation datasets which together make up the shared task for the Generation Evaluation and Metrics workshop (think GLUE for text generation)
The changes from the original datasets are detailed in the Dataset Cards on the GEM website, which are linked to in this dataset card.
cc @sebastianGehrmann
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1807/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1807/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1807.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1807",
"merged_at": "2021-02-02T18:06:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1807.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1807"
} | true | [
"Nice !"
] |
https://api.github.com/repos/huggingface/datasets/issues/6072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6072/comments | https://api.github.com/repos/huggingface/datasets/issues/6072/events | https://github.com/huggingface/datasets/pull/6072 | 1,822,123,560 | PR_kwDODunzps5WbWFN | 6,072 | Fix fsspec storage_options from load_dataset | [] | closed | false | null | 6 | 2023-07-26T10:44:23Z | 2023-07-27T12:51:51Z | 2023-07-27T12:42:57Z | null | close https://github.com/huggingface/datasets/issues/6071 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6072/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6072/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6072.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6072",
"merged_at": "2023-07-27T12:42:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6072.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6072"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/5596 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5596/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5596/comments | https://api.github.com/repos/huggingface/datasets/issues/5596/events | https://github.com/huggingface/datasets/issues/5596 | 1,604,919,993 | I_kwDODunzps5fqSK5 | 5,596 | [TypeError: Couldn't cast array of type] Can only load a subset of the dataset | [] | closed | false | null | 4 | 2023-03-01T12:53:08Z | 2023-04-19T10:19:37Z | 2023-03-02T11:12:11Z | null | ### Describe the bug
I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error:
```
casted_values = _c(array.values, feature[0])
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wrapper
return func(array, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 2132, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
TypeError: Couldn't cast array of type
struct<type: string, action: string, datetime: timestamp[s], author: string, title: string, description: string, comment_id: int64, comment: string, labels: list<item: string>>
to
{'type': Value(dtype='string', id=None), 'action': Value(dtype='string', id=None), 'datetime': Value(dtype='timestamp[s]', id=None), 'author': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'comment_id': Value(dtype='int64', id=None), 'comment': Value(dtype='string', id=None)}
```
But I can successfully load a subset of the dataset; for example, this works:
```python
ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train", data_files=[f"data/data-{x}.jsonl" for x in range(10)])
```
and `ds.features` returns:
```
{'repo': Value(dtype='string', id=None),
'org': Value(dtype='string', id=None),
'issue_id': Value(dtype='int64', id=None),
'issue_number': Value(dtype='int64', id=None),
'pull_request': {'user_login': Value(dtype='string', id=None),
'repo': Value(dtype='string', id=None),
'number': Value(dtype='int64', id=None)},
'events': [{'type': Value(dtype='string', id=None),
'action': Value(dtype='string', id=None),
'datetime': Value(dtype='timestamp[s]', id=None),
'author': Value(dtype='string', id=None),
'title': Value(dtype='string', id=None),
'description': Value(dtype='string', id=None),
'comment_id': Value(dtype='int64', id=None),
'comment': Value(dtype='string', id=None)}]}
```
So I'm not sure if there's an issue with just some of the files. I'd be grateful for any suggestions to fix the issue.
Side note:
I saw this related [issue](https://github.com/huggingface/datasets/issues/3637) and tried to write a loading script to have `events` as a `Sequence` and not `list` [here](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/blob/main/loading.py) (the script was renamed). It worked with a subset locally, but it doesn't for the remote dataset: it can't find https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues/resolve/main/data.
### Steps to reproduce the bug
```python
from datasets import load_dataset
ds = load_dataset('bigcode-data/the-stack-gh-issues', split="train")
```
### Expected behavior
Load the entire dataset successfully.
### Environment info
- `datasets` version: 2.10.1
- Platform: Linux-4.19.0-23-cloud-amd64-x86_64-with-debian-10.13
- Python version: 3.7.12
- PyArrow version: 9.0.0
- Pandas version: 1.3.4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5596/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5596/timeline | null | completed | null | null | false | [
"Apparently some JSON objects have a `\"labels\"` field. Since this field is not present in every object, you must specify all the fields types in the README.md\r\n\r\nEDIT: actually specifying the feature types doesn’t solve the issue, it raises an error because “labels” is missing in the data",
"We've updated t... |
https://api.github.com/repos/huggingface/datasets/issues/1649 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1649/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1649/comments | https://api.github.com/repos/huggingface/datasets/issues/1649/events | https://github.com/huggingface/datasets/pull/1649 | 775,544,487 | MDExOlB1bGxSZXF1ZXN0NTQ2MjAzMjE1 | 1,649 | Update README.md | [] | closed | false | null | 0 | 2020-12-28T19:05:00Z | 2020-12-29T10:50:58Z | 2020-12-29T10:43:03Z | null | Added information in the dataset card | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1649/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1649/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1649.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1649",
"merged_at": "2020-12-29T10:43:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1649.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1649"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2458 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2458/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2458/comments | https://api.github.com/repos/huggingface/datasets/issues/2458/events | https://github.com/huggingface/datasets/issues/2458 | 915,199,693 | MDU6SXNzdWU5MTUxOTk2OTM= | 2,458 | Revert default in-memory for small datasets | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"closed_at": "2021-06-08T18:51:04Z",
"closed_issues": 2,
"created_at": "2021-04-20T16:49:16Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-06-08T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/4",
"id": 6680642,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/4/labels",
"node_id": "MDk6TWlsZXN0b25lNjY4MDY0Mg==",
"number": 4,
"open_issues": 0,
"state": "closed",
"title": "1.8",
"updated_at": "2021-06-08T18:51:37Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/4"
} | 1 | 2021-06-08T15:51:41Z | 2021-06-08T18:57:11Z | 2021-06-08T17:55:43Z | null | Users are reporting issues and confusion about setting default in-memory to True for small datasets.
We see 2 clear use cases of Datasets:
- the "canonical" way, where you can work with very large datasets, as they are memory-mapped and cached (after every transformation)
- some edge cases (speed benchmarks, interactive/exploratory analysis,...), where default in-memory can explicitly be enabled, and no caching will be done
After discussing with @lhoestq we have agreed to:
- revert this feature (implemented in #2182)
- explain in the docs how to optimize speed/performance by setting default in-memory
cc: @stas00 https://github.com/huggingface/datasets/pull/2409#issuecomment-856210552 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2458/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2458/timeline | null | completed | null | null | false | [
"cc: @krandiash (pinged in reverted PR)."
] |
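After the revert, in-memory behaviour is opt-in. A short sketch of the two knobs mentioned in the docs, assuming a recent `datasets` version:

```python
import datasets
from datasets import load_dataset

# Opt in explicitly for a single dataset...
ds = load_dataset("squad", split="train", keep_in_memory=True)

# ...or set a size threshold below which datasets are copied in memory
# by default (0, the default, disables the behaviour entirely).
datasets.config.IN_MEMORY_MAX_SIZE = 250 * 2**20  # 250 MiB
```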
https://api.github.com/repos/huggingface/datasets/issues/178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/178/comments | https://api.github.com/repos/huggingface/datasets/issues/178/events | https://github.com/huggingface/datasets/pull/178 | 621,979,849 | MDExOlB1bGxSZXF1ZXN0NDIwOTMyMDI5 | 178 | [Manual data] improve error message for manual data in general | [] | closed | false | null | 0 | 2020-05-20T18:10:45Z | 2020-05-20T18:18:52Z | 2020-05-20T18:18:50Z | null | `nlp.load("xsum")` now leads to the following error message:

I guess the manual download instructions for `xsum` can also be improved. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/178/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/178/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/178.diff",
"html_url": "https://github.com/huggingface/datasets/pull/178",
"merged_at": "2020-05-20T18:18:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/178.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/178"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1678 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1678/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1678/comments | https://api.github.com/repos/huggingface/datasets/issues/1678/events | https://github.com/huggingface/datasets/pull/1678 | 777,567,920 | MDExOlB1bGxSZXF1ZXN0NTQ3ODI4MTMy | 1,678 | Switchboard Dialog Act Corpus added under `datasets/swda` | [] | closed | false | null | 8 | 2021-01-03T03:53:41Z | 2021-01-08T18:09:21Z | 2021-01-05T10:06:35Z | null | Switchboard Dialog Act Corpus
Intro:
The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2,
with turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information
about the associated turn. The SwDA project was undertaken at UC Boulder in the late 1990s.
Details:
[homepage](http://compprag.christopherpotts.net/swda.html)
[repo](https://github.com/NathanDuran/Switchboard-Corpus/raw/master/swda_data/)
I believe this is an important dataset to have, since no dialogue-act dataset has been added yet.
I didn't find any formatting guidelines for pull requests. I hope all this information is enough.
For any support please contact me. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1678/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1678/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1678.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1678",
"merged_at": "2021-01-05T10:06:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1678.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1678"
} | true | [
"@lhoestq Thank you for your detailed comments! I fixed everything you suggested.\r\n\r\nPlease let me know if I'm missing anything else.",
"It looks like the Transcript and Utterance objects are missing, maybe we can mention it in the README ? Or just add them ? @gmihaila @bhavitvyamalik ",
"Hi @lhoestq,\r\nI'... |
https://api.github.com/repos/huggingface/datasets/issues/1917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1917/comments | https://api.github.com/repos/huggingface/datasets/issues/1917/events | https://github.com/huggingface/datasets/issues/1917 | 812,390,178 | MDU6SXNzdWU4MTIzOTAxNzg= | 1,917 | UnicodeDecodeError: windows 10 machine | [] | closed | false | null | 1 | 2021-02-19T22:13:05Z | 2021-02-19T22:41:11Z | 2021-02-19T22:40:28Z | null | Windows 10
Python 3.6.8
when running
```
import datasets
oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am")
print(oscar_am["train"][0])
```
I get the following error
```
file "C:\PYTHON\3.6.8\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 58: character maps to <undefined>
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1917/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1917/timeline | null | completed | null | null | false | [
"upgraded to php 3.9.2 and it works!"
] |
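The traceback above comes from Windows falling back to the locale code page (cp1252) when decoding text on older Python versions. For code you control, a minimal fix is to request UTF-8 explicitly; the file name below is illustrative:

```python
# Without an explicit encoding, open() on Windows uses the locale code
# page (cp1252 here), which cannot decode byte 0x9d.
with open("oscar_sample.txt", encoding="utf-8") as f:
    text = f.read()
```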
https://api.github.com/repos/huggingface/datasets/issues/5071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5071/comments | https://api.github.com/repos/huggingface/datasets/issues/5071/events | https://github.com/huggingface/datasets/pull/5071 | 1,397,301,270 | PR_kwDODunzps5AMG3g | 5,071 | Support DEFAULT_CONFIG_NAME when no BUILDER_CONFIGS | [] | closed | false | null | 2 | 2022-10-05T06:28:39Z | 2022-10-06T14:43:12Z | 2022-10-06T14:40:26Z | null | This PR supports defining a default config name, even if no predefined allowed config names are set.
Fix #5070.
CC: @stas00 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5071/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5071/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5071.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5071",
"merged_at": "2022-10-06T14:40:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5071.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5071"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Super, thanks a lot for adding this support, Albert!"
] |
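A minimal sketch of the loading-script pattern this PR enables: naming the single on-the-fly config without declaring `BUILDER_CONFIGS`. The class and field names are illustrative:

```python
import datasets

class MyDataset(datasets.GeneratorBasedBuilder):
    # No BUILDER_CONFIGS list: a single config is created on the fly,
    # and DEFAULT_CONFIG_NAME gives it a name.
    DEFAULT_CONFIG_NAME = "default"

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN)]

    def _generate_examples(self):
        yield 0, {"text": "hello"}
```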
https://api.github.com/repos/huggingface/datasets/issues/5646 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5646/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5646/comments | https://api.github.com/repos/huggingface/datasets/issues/5646/events | https://github.com/huggingface/datasets/pull/5646 | 1,627,838,762 | PR_kwDODunzps5MOqjj | 5,646 | Allow self as key in `Features` | [] | closed | false | null | 3 | 2023-03-16T16:17:03Z | 2023-03-16T17:21:58Z | 2023-03-16T17:14:50Z | null | Fix #5641 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5646/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5646/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5646.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5646",
"merged_at": "2023-03-16T17:14:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5646.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5646"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/4232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4232/comments | https://api.github.com/repos/huggingface/datasets/issues/4232/events | https://github.com/huggingface/datasets/pull/4232 | 1,216,659,444 | PR_kwDODunzps421qz4 | 4,232 | adding new tag to tasks.json and modified for existing datasets | [] | closed | false | null | 2 | 2022-04-27T01:21:09Z | 2022-05-03T14:23:56Z | 2022-05-03T14:16:39Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4232/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4232/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4232.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4232",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4232.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4232"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"closing in favor of https://github.com/huggingface/datasets/pull/4244"
] |
https://api.github.com/repos/huggingface/datasets/issues/5043 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5043/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5043/comments | https://api.github.com/repos/huggingface/datasets/issues/5043/events | https://github.com/huggingface/datasets/pull/5043 | 1,391,141,773 | PR_kwDODunzps4_3uzy | 5,043 | Fix `flatten_indices` with empty indices mapping | [] | closed | false | null | 1 | 2022-09-29T16:17:28Z | 2022-09-30T15:46:39Z | 2022-09-30T15:44:25Z | null | Fix #5038 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5043/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5043/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5043.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5043",
"merged_at": "2022-09-30T15:44:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5043.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5043"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4192/comments | https://api.github.com/repos/huggingface/datasets/issues/4192/events | https://github.com/huggingface/datasets/issues/4192 | 1,210,692,554 | I_kwDODunzps5IKbPK | 4,192 | load_dataset can't load local dataset,Unable to find ... | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2022-04-21T08:28:58Z | 2022-04-25T16:51:57Z | 2022-04-22T07:39:53Z | null |
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset
**config_kwargs,
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1496, in load_dataset_builder
data_files=data_files,
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1155, in dataset_module_factory
download_mode=download_mode,
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 800, in get_module
data_files = DataFilesDict.from_local_or_remote(patterns, use_auth_token=self.downnload_config.use_auth_token)
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 582, in from_local_or_remote
if not isinstance(patterns_for_key, DataFilesList)
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 544, in from_local_or_remote
data_files = resolve_patterns_locally_or_by_urls(base_path, patterns, allowed_extensions)
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 194, in resolve_patterns_locally_or_by_urls
for path in _resolve_single_pattern_locally(base_path, pattern, allowed_extensions):
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/data_files.py", line 144, in _resolve_single_pattern_locally
raise FileNotFoundError(error_msg)
FileNotFoundError: Unable to find '/home/gs603/ahf/pretrained/dataset/dataset_infos.json' at /home/gs603/ahf/pretrained


the code is in the model.py,why I can't use the load_dataset function to load my local dataset? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4192/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4192/timeline | null | completed | null | null | false | [
"Hi! :)\r\n\r\nI believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json... |
https://api.github.com/repos/huggingface/datasets/issues/1532 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1532/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1532/comments | https://api.github.com/repos/huggingface/datasets/issues/1532/events | https://github.com/huggingface/datasets/pull/1532 | 764,772,184 | MDExOlB1bGxSZXF1ZXN0NTM4NjgxODcz | 1,532 | adding hate-speech-and-offensive-language | [] | closed | false | null | 1 | 2020-12-13T02:16:31Z | 2020-12-17T18:36:54Z | 2020-12-17T18:10:05Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1532/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1532/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1532.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1532",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1532.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1532"
} | true | [
"made suggested changes and a new PR created here : https://github.com/huggingface/datasets/pull/1597"
] | |
https://api.github.com/repos/huggingface/datasets/issues/5627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5627/comments | https://api.github.com/repos/huggingface/datasets/issues/5627/events | https://github.com/huggingface/datasets/issues/5627 | 1,619,336,609 | I_kwDODunzps5ghR2h | 5,627 | Unable to load AutoTrain-generated dataset from the hub | [] | open | false | null | 2 | 2023-03-10T17:25:58Z | 2023-03-11T15:44:42Z | null | null | ### Describe the bug
DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match
```
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
_fingerprint: string
_format_columns: list<item: string>
child 0, item: string
_format_kwargs: struct<>
_format_type: null
_indexes: struct<>
_output_all_columns: bool
_split: null
to
{'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}}
because column names don't match
```
### Steps to reproduce the bug
Steps to reproduce:
1. `pip install datasets==2.10.1`
2. Attempt to load (private dataset). Note that I'm authenticated via ` huggingface-cli login`
```
from datasets import load_dataset
# load dataset
dataset = "ijmiller2/autotrain-data-betterbin-vision-10000"
dataset = load_dataset(dataset)
```
Here's the full traceback:
```Downloading and preparing dataset json/ijmiller2--autotrain-data-betterbin-vision-10000 to /Users/ian/.cache/huggingface/datasets/ijmiller2___json/ijmiller2--autotrain-data-betterbin-vision-10000-2eae034a9ff8a1a9/0.0.0/0f7e3662623656454fcd2b650f34e886a7db4b9104504885bd462096cc7a9f51...
Downloading data files: 100%|███████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2383.80it/s]
Extracting data files: 100%|█████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 505.95it/s]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1874, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1868 writer = writer_class(
1869 features=writer._features,
1870 path=fpath.replace("SSSSS", f"{shard_id:05d}").replace("JJJJJ", f"{job_id:05d}"),
1871 storage_options=self._fs.storage_options,
1872 embed_local_files=embed_local_files,
1873 )
-> 1874 writer.write_table(table)
1875 num_examples_progress_update += len(table)
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/arrow_writer.py:568, in ArrowWriter.write_table(self, pa_table, writer_batch_size)
567 pa_table = pa_table.combine_chunks()
--> 568 pa_table = table_cast(pa_table, self._schema)
569 if self.embed_local_files:
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2312, in table_cast(table, schema)
2311 if table.schema != schema:
-> 2312 return cast_table_to_schema(table, schema)
2313 elif table.schema.metadata != schema.metadata:
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/table.py:2270, in cast_table_to_schema(table, schema)
2269 if sorted(table.column_names) != sorted(features):
-> 2270 raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
2271 arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
ValueError: Couldn't cast
_data_files: list<item: struct<filename: string>>
child 0, item: struct<filename: string>
child 0, filename: string
_fingerprint: string
_format_columns: list<item: string>
child 0, item: string
_format_kwargs: struct<>
_format_type: null
_indexes: struct<>
_output_all_columns: bool
_split: null
to
{'citation': Value(dtype='string', id=None), 'description': Value(dtype='string', id=None), 'features': {'image': {'_type': Value(dtype='string', id=None)}, 'target': {'names': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), '_type': Value(dtype='string', id=None)}}, 'homepage': Value(dtype='string', id=None), 'license': Value(dtype='string', id=None), 'splits': {'train': {'name': Value(dtype='string', id=None), 'num_bytes': Value(dtype='int64', id=None), 'num_examples': Value(dtype='int64', id=None), 'dataset_name': Value(dtype='null', id=None)}}}
because column names don't match
The above exception was the direct cause of the following exception:
DatasetGenerationError Traceback (most recent call last)
Input In [8], in <cell line: 6>()
4 # load dataset
5 dataset = "ijmiller2/autotrain-data-betterbin-vision-10000"
----> 6 dataset = load_dataset(dataset)
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/load.py:1782, in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, verification_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, num_proc, **config_kwargs)
1779 try_from_hf_gcs = path not in _PACKAGED_DATASETS_MODULES
1781 # Download and prepare data
-> 1782 builder_instance.download_and_prepare(
1783 download_config=download_config,
1784 download_mode=download_mode,
1785 verification_mode=verification_mode,
1786 try_from_hf_gcs=try_from_hf_gcs,
1787 num_proc=num_proc,
1788 )
1790 # Build dataset for splits
1791 keep_in_memory = (
1792 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
1793 )
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:872, in DatasetBuilder.download_and_prepare(self, output_dir, download_config, download_mode, verification_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, file_format, max_shard_size, num_proc, storage_options, **download_and_prepare_kwargs)
870 if num_proc is not None:
871 prepare_split_kwargs["num_proc"] = num_proc
--> 872 self._download_and_prepare(
873 dl_manager=dl_manager,
874 verification_mode=verification_mode,
875 **prepare_split_kwargs,
876 **download_and_prepare_kwargs,
877 )
878 # Sync info
879 self.info.dataset_size = sum(split.num_bytes for split in self.info.splits.values())
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:967, in DatasetBuilder._download_and_prepare(self, dl_manager, verification_mode, **prepare_split_kwargs)
963 split_dict.add(split_generator.split_info)
965 try:
966 # Prepare split will record examples associated to the split
--> 967 self._prepare_split(split_generator, **prepare_split_kwargs)
968 except OSError as e:
969 raise OSError(
970 "Cannot find data file. "
971 + (self.manual_download_instructions or "")
972 + "\nOriginal error:\n"
973 + str(e)
974 ) from None
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1749, in ArrowBasedBuilder._prepare_split(self, split_generator, file_format, num_proc, max_shard_size)
1747 job_id = 0
1748 with pbar:
-> 1749 for job_id, done, content in self._prepare_split_single(
1750 gen_kwargs=gen_kwargs, job_id=job_id, **_prepare_split_args
1751 ):
1752 if done:
1753 result = content
File ~/anaconda3/envs/betterbin/lib/python3.8/site-packages/datasets/builder.py:1892, in ArrowBasedBuilder._prepare_split_single(self, gen_kwargs, fpath, file_format, max_shard_size, job_id)
1890 if isinstance(e, SchemaInferenceError) and e.__context__ is not None:
1891 e = e.__context__
-> 1892 raise DatasetGenerationError("An error occurred while generating the dataset") from e
1894 yield job_id, True, (total_num_examples, total_num_bytes, writer._features, num_shards, shard_lengths)
DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
I'm ultimately trying to generate my own performance metrics on validation data (before putting an endpoint into production) and so was hoping to load all or at least the validation subset from the hub.
I'm expecting the `load_dataset()` function to work as shown in the documentation [here](https://huggingface.co/docs/datasets/loading#hugging-face-hub):
```python
dataset = load_dataset(
"lhoestq/custom_squad",
revision="main" # tag name, or branch name, or commit hash
)
```
### Environment info
- `datasets` version: 2.10.1
- Platform: macOS-13.2.1-arm64-arm-64bit
- Python version: 3.8.13
- PyArrow version: 9.0.0
- Pandas version: 1.4.4 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5627/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5627/timeline | null | null | null | null | false | [
"The AutoTrain format is not supported right now. I think it would require a dedicated dataset builder",
"Okay, good to know. Thanks for the reply. For now I will just have to\nmanage the split manually before training, because I can’t find any way of\npulling out file indices or file names from the autogenerated... |
https://api.github.com/repos/huggingface/datasets/issues/3766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3766/comments | https://api.github.com/repos/huggingface/datasets/issues/3766/events | https://github.com/huggingface/datasets/pull/3766 | 1,145,829,289 | PR_kwDODunzps4zOujH | 3,766 | Fix head_qa data URL | [] | closed | false | null | 0 | 2022-02-21T13:52:50Z | 2022-02-21T14:39:20Z | 2022-02-21T14:39:19Z | null | Fix #3758. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3766/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3766/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3766.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3766",
"merged_at": "2022-02-21T14:39:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3766.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3766"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4473/comments | https://api.github.com/repos/huggingface/datasets/issues/4473/events | https://github.com/huggingface/datasets/pull/4473 | 1,267,555,994 | PR_kwDODunzps45d5-R | 4,473 | Add SST-2 dataset | [] | closed | false | null | 5 | 2022-06-10T13:37:26Z | 2022-06-13T14:11:34Z | 2022-06-13T14:01:09Z | null | Add SST-2 dataset.
Currently it is part of the GLUE benchmark.
This PR adds it as a standalone dataset.
CC: @julien-c | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4473/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4473/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4473.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4473",
"merged_at": "2022-06-13T14:01:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4473.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4473"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"on the hub this dataset is referenced as `sst-2` not `sst2` – is there a canonical orthography? If not, could we name it `sst-2`?",
"@julien-c, we normally do not use hyphens for dataset names: whenever the original dataset name co... |
https://api.github.com/repos/huggingface/datasets/issues/1199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1199/comments | https://api.github.com/repos/huggingface/datasets/issues/1199/events | https://github.com/huggingface/datasets/pull/1199 | 757,909,237 | MDExOlB1bGxSZXF1ZXN0NTMzMTg0Nzk3 | 1,199 | Turkish NER dataset, script works fine, couldn't generate dummy data | [] | closed | false | null | 2 | 2020-12-06T12:00:03Z | 2020-12-16T16:13:24Z | 2020-12-16T16:13:24Z | null | I've written the script (Turkish_NER.py) that includes the dataset. The dataset is a zip inside another zip, and it's extracted as a .DUMP file. However, after preprocessing I only get an .arrow file. After I ran the script with no error messages, I get the dataset's .arrow file, LICENSE, and dataset_info.json. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1199/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1199/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1199.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1199",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1199.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1199"
} | true | [
"the .DUMP file looks like a txt with one example per line so adding `--match_text_files *.DUMP --n_lines 50` to the dummy generation command might work .",
"We can close this PR since a new PR was open at #1268 "
] |
https://api.github.com/repos/huggingface/datasets/issues/4803 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4803/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4803/comments | https://api.github.com/repos/huggingface/datasets/issues/4803/events | https://github.com/huggingface/datasets/issues/4803 | 1,332,079,562 | I_kwDODunzps5PZevK | 4,803 | Support `pipeline` argument in inspect.py functions | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 0 | 2022-08-08T16:01:24Z | 2022-08-08T16:01:24Z | null | null | **Is your feature request related to a problem? Please describe.**
The `wikipedia` dataset requires a `pipeline` argument to build the list of splits:
https://huggingface.co/datasets/wikipedia/blob/main/wikipedia.py#L937
But this is currently not supported in `get_dataset_config_info`:
https://github.com/huggingface/datasets/blob/main/src/datasets/inspect.py#L373-L375
which is called by other functions, e.g. `get_dataset_split_names`.
**Additional context**
The dataset viewer is not working out-of-the-box on `wikipedia` for this reason:
https://huggingface.co/datasets/wikipedia/viewer
<img width="637" alt="Capture d’écran 2022-08-08 à 12 01 16" src="https://user-images.githubusercontent.com/1676121/183461838-5330783b-0269-4ba7-a999-314cde2023d8.png">
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4803/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4803/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1780/comments | https://api.github.com/repos/huggingface/datasets/issues/1780/events | https://github.com/huggingface/datasets/pull/1780 | 793,882,132 | MDExOlB1bGxSZXF1ZXN0NTYxNDkxNTgy | 1,780 | Update SciFact URL | [] | closed | false | null | 7 | 2021-01-26T02:49:06Z | 2021-01-28T18:48:00Z | 2021-01-28T10:19:45Z | null | Hi,
I'm following up on this [issue](https://github.com/huggingface/datasets/issues/1717). I'm the SciFact dataset creator, and I'm trying to update the SciFact data URL in your repo. Thanks again for adding the dataset!
Basically, I'd just like to change the `_URL` to `"https://scifact.s3-us-west-2.amazonaws.com/release/latest/data.tar.gz"`. I changed `scifact.py` appropriately and tried running
```
python datasets-cli test datasets/scifact --save_infos --all_configs
```
which I was hoping would update the `dataset_infos.json` for SciFact. But for some reason the code still seems to be looking for the old version of the dataset. Full stack trace below. I've tried to clear all my Huggingface-related caches, and I've `git grep`'d to make sure that the old path to the dataset isn't floating around somewhere. So I'm not sure why this is happening.
Can you help me switch the download URL?
```
(datasets) $ python datasets-cli test datasets/scifact --save_infos --all_configs
Checking datasets/scifact/scifact.py for additional imports.
Found main folder for dataset datasets/scifact/scifact.py at /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact
Found specific version folder for dataset datasets/scifact/scifact.py at /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534
Found script file from datasets/scifact/scifact.py to /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534/scifact.py
Found dataset infos file from datasets/scifact/dataset_infos.json to /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534/dataset_infos.json
Found metadata file for dataset datasets/scifact/scifact.py at /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534/scifact.json
Loading Dataset Infos from /Users/dwadden/.cache/huggingface/modules/datasets_modules/datasets/scifact/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534
Testing builder 'corpus' (1/2)
Generating dataset scifact (/Users/dwadden/.cache/huggingface/datasets/scifact/corpus/1.0.0/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534)
Downloading and preparing dataset scifact/corpus (download: 2.72 MiB, generated: 7.63 MiB, post-processed: Unknown size, total: 10.35 MiB) to /Users/dwadden/.cache/huggingface/datasets/scifact/corpus/1.0.0/2b43b4e125ce3369da7d6353961d9d315e6593f24cc7bbe9ede5e5c911d11534...
Downloading took 0.0 min
Checksum Computation took 0.0 min
Traceback (most recent call last):
File "/Users/dwadden/proj/datasets/datasets-cli", line 36, in <module>
service.run()
File "/Users/dwadden/proj/datasets/src/datasets/commands/test.py", line 139, in run
builder.download_and_prepare(
File "/Users/dwadden/proj/datasets/src/datasets/builder.py", line 562, in download_and_prepare
self._download_and_prepare(
File "/Users/dwadden/proj/datasets/src/datasets/builder.py", line 622, in _download_and_prepare
verify_checksums(
File "/Users/dwadden/proj/datasets/src/datasets/utils/info_utils.py", line 32, in verify_checksums
raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {'https://ai2-s2-scifact.s3-us-west-2.amazonaws.com/release/2020-05-01/data.tar.gz'}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1780/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1780/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1780.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1780",
"merged_at": "2021-01-28T10:19:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1780.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1780"
} | true | [
"Hi ! The error you get is the result of some verifications the library is doing when loading a dataset that already has some metadata in the dataset_infos.json. You can ignore the verifications with \r\n```\r\npython datasets-cli test datasets/scifact --save_infos --all_configs --ignore_verifications\r\n```\r\nThi... |
https://api.github.com/repos/huggingface/datasets/issues/4071 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4071/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4071/comments | https://api.github.com/repos/huggingface/datasets/issues/4071/events | https://github.com/huggingface/datasets/issues/4071 | 1,187,587,683 | I_kwDODunzps5GySZj | 4,071 | Loading issue for xuyeliu/notebookCDG dataset | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 1 | 2022-03-31T06:36:29Z | 2022-03-31T08:17:01Z | 2022-03-31T08:16:16Z | null | ## Dataset viewer issue for '*xuyeliu/notebookCDG*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/xuyeliu/notebookCDG)*
*Couldn't load xuyeliu/notebookCDG with the provided script:*
```
from datasets import load_dataset
dataset = load_dataset("xuyeliu/notebookCDG/dataset_notebook.pkl")
```
I get an error message as follows:
FileNotFoundError: Couldn't find a dataset script at /home/code_documentation/code/xuyeliu/notebookCDG/notebookCDG.py or any data file in the same directory. Couldn't find 'xuyeliu/notebookCDG' on the Hugging Face Hub either: FileNotFoundError: Unable to resolve any data file that matches ['**train*'] in dataset repository xuyeliu/notebookCDG with any supported extension ['csv', 'tsv', 'json', 'jsonl', 'parquet', 'txt', 'blp', 'bmp', 'dib', 'bufr', 'cur', 'pcx', 'dcx', 'dds', 'ps', 'eps', 'fit', 'fits', 'fli', 'flc', 'ftc', 'ftu', 'gbr', 'gif', 'grib', 'h5', 'hdf', 'png', 'apng', 'jp2', 'j2k', 'jpc', 'jpf', 'jpx', 'j2c', 'icns', 'ico', 'im', 'iim', 'tif', 'tiff', 'jfif', 'jpe', 'jpg', 'jpeg', 'mpg', 'mpeg', 'msp', 'pcd', 'pxr', 'pbm', 'pgm', 'ppm', 'pnm', 'psd', 'bw', 'rgb', 'rgba', 'sgi', 'ras', 'tga', 'icb', 'vda', 'vst', 'webp', 'wmf', 'emf', 'xbm', 'xpm', 'zip']
Am I the one who added this dataset ? No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4071/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4071/timeline | null | completed | null | null | false | [
"Hi @Jun-jie-Huang,\r\n\r\nAs the error message says, \".pkl\" data files are not supported.\r\n\r\nIf you would like to share your dataset on the Hub, you would need:\r\n- either to create a Python loading script, that loads the data in any format\r\n- or to transform your data files to one of the supported format... |
https://api.github.com/repos/huggingface/datasets/issues/2396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2396/comments | https://api.github.com/repos/huggingface/datasets/issues/2396/events | https://github.com/huggingface/datasets/issues/2396 | 899,016,308 | MDU6SXNzdWU4OTkwMTYzMDg= | 2,396 | strange datasets from OSCAR corpus | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 2 | 2021-05-23T13:06:02Z | 2021-06-17T13:54:37Z | null | null | 

From the [official site](https://oscar-corpus.com/), the Yue Chinese dataset should have 2.2 KB of data.
Seven training instances is obviously not the right number.
As I can read Yue Chinese, I can tell the last instance is definitely not something that would appear on Common Crawl.
And even if you don't read Yue Chinese, you can tell the first six instances are problematic.
(It is embarrassing, as the 7 training instances look exactly like something from a pornographic novel or flirting messages in a dating-app chat)
It might not be a problem with the huggingface/datasets implementation, because when I tried to download the dataset from the official site, I found out that the zip file is corrupted.
I will try to inform the host of OSCAR corpus later.
Anyway, a remake of this dataset in huggingface/datasets is needed, perhaps after the host of the dataset fixes the issue.
> Hi @jerryIsHere , sorry for the late response! Sadly this is normal, the problem comes from fasttext's classifier which we used to create the original corpus. In general the classifier is not really capable of properly recognizing Yue Chinese so the file ends up being just noise from Common Crawl. Some of these problems with OSCAR were already discussed [here](https://arxiv.org/pdf/2103.12028.pdf) but we are working on explicitly documenting the problems by language on our website. In fact, could you please open an issue on [our repo](https://github.com/oscar-corpus/oscar-website/issues) as well so that we can track it?
Thanks a lot, the new post is here:
https://github.com/oscar-corpus/oscar-website/issues/11 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2396/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2396/timeline | null | null | null | null | false | [
"Hi ! Thanks for reporting\r\ncc @pjox is this an issue from the data ?\r\n\r\nAnyway we should at least mention that OSCAR could contain such contents in the dataset card, you're totally right @jerryIsHere ",
"Hi @jerryIsHere , sorry for the late response! Sadly this is normal, the problem comes form fasttext's ... |
https://api.github.com/repos/huggingface/datasets/issues/2374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2374/comments | https://api.github.com/repos/huggingface/datasets/issues/2374/events | https://github.com/huggingface/datasets/pull/2374 | 894,579,364 | MDExOlB1bGxSZXF1ZXN0NjQ2OTIyMjkw | 2,374 | add `desc` to `tqdm` in `Dataset.map()` | [] | closed | false | null | 5 | 2021-05-18T16:44:29Z | 2021-05-27T15:44:04Z | 2021-05-26T14:59:21Z | null | Fixes #2330. Please let me know if anything is also required in this | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2374/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2374/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2374.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2374",
"merged_at": "2021-05-26T14:59:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2374.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2374"
} | true | [
"Once this is merged, let's update `transformers` examples to use this new code. As currently all those tqdm bars are who knows what they are....\r\n\r\nhttps://github.com/huggingface/transformers/issues/11797",
"Sure @stas00! Once this is merged let's discuss what all changes can be done on `transformers` side",... |
https://api.github.com/repos/huggingface/datasets/issues/767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/767/comments | https://api.github.com/repos/huggingface/datasets/issues/767/events | https://github.com/huggingface/datasets/issues/767 | 730,771,610 | MDU6SXNzdWU3MzA3NzE2MTA= | 767 | Add option for named splits when using ds.train_test_split | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 1 | 2020-10-27T19:59:44Z | 2020-11-10T14:05:21Z | null | null | ### Feature Request 🚀
Can we add a way to name your splits when using the `.train_test_split` function?
In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, it's kind of useless to get a `test` split back from `train_test_split`, as it'll just overwrite my real `test` split that I intended to keep.
### Workaround
This is my hack for dealing with this, for now :slightly_smiling_face:
```python
from datasets import load_dataset
ds = load_dataset('imdb')
ds['train'], ds['validation'] = ds['train'].train_test_split(.1).values()
```
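A slightly more general version of the same hack, as a minimal sketch (the helper name and defaults are mine, not an existing API), that also keeps any other splits intact:
```python
from datasets import DatasetDict, load_dataset

def add_validation_split(ds: DatasetDict, val_fraction: float = 0.1, seed: int = 42) -> DatasetDict:
    # Split a validation set off `train` without clobbering the real `test` split.
    split = ds["train"].train_test_split(test_size=val_fraction, seed=seed)
    new_ds = DatasetDict(ds)  # shallow copy keeps `test` (and e.g. `unsupervised`)
    new_ds["train"] = split["train"]
    new_ds["validation"] = split["test"]  # rename the auto-generated "test" split
    return new_ds

ds = add_validation_split(load_dataset("imdb"))
```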
| {
"+1": 5,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 5,
"url": "https://api.github.com/repos/huggingface/datasets/issues/767/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/767/timeline | null | null | null | null | false | [
"Yes definitely we should give more flexibility to control the name of the splits outputted by `train_test_split`.\r\n\r\nRelated is the very interesting feedback from @bramvanroy on how we should improve this method: https://discuss.huggingface.co/t/how-to-split-main-dataset-into-train-dev-test-as-datasetdict/1090... |
https://api.github.com/repos/huggingface/datasets/issues/3288 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3288/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3288/comments | https://api.github.com/repos/huggingface/datasets/issues/3288/events | https://github.com/huggingface/datasets/pull/3288 | 1,056,145,703 | PR_kwDODunzps4up6S5 | 3,288 | Allow datasets with indices table when concatenating along axis=1 | [] | closed | false | null | 0 | 2021-11-17T13:41:28Z | 2021-11-17T15:41:12Z | 2021-11-17T15:41:11Z | null | Calls `flatten_indices` on the datasets with indices table in `concatenate_datasets` to fix issues when concatenating along `axis=1`.
cc @lhoestq: I decided to flatten all the datasets instead of flattening all the datasets except the largest one in the end. The latter approach fails on the following example:
```python
a = Dataset.from_dict({"a": [10, 20, 30, 40]})
b = Dataset.from_dict({"b": [10, 20, 30, 40, 50, 60]}) # largest dataset
a = a.select([1, 2, 3])
b = b.select([1, 2, 3])
concatenate_datasets([a, b], axis=1) # fails at line concat_tables(...) because the real length of b's data is 6 and a's length is 3 after flattening (was 4 before flattening)
```
Also, it requires additional re-ordering of indices to prepare them for working with the indices table of the largest dataset. IMO not worth when we save only one `flatten_indices` call. (feel free to check the code of that approach at https://github.com/huggingface/datasets/commit/6acd10481c70950dcfdbfd2bab0bf0c74ad80bcb if you are interested)
Fixes #3273
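For reference, a minimal sketch of the case this PR unblocks (values are illustrative):
```python
from datasets import Dataset, concatenate_datasets

a = Dataset.from_dict({"a": [10, 20, 30, 40]}).select([1, 2, 3])
b = Dataset.from_dict({"b": [100, 200, 300, 400]}).select([1, 2, 3])
# Both datasets carry an indices table after select(); flatten_indices()
# is now applied internally so they can be concatenated column-wise.
c = concatenate_datasets([a, b], axis=1)
print(c[0])  # {'a': 20, 'b': 200}
```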
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3288/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3288/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3288.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3288",
"merged_at": "2021-11-17T15:41:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3288.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3288"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/692/comments | https://api.github.com/repos/huggingface/datasets/issues/692/events | https://github.com/huggingface/datasets/pull/692 | 712,818,968 | MDExOlB1bGxSZXF1ZXN0NDk2MjM4NzIw | 692 | Update README.md | [] | closed | false | null | 4 | 2020-10-01T12:57:22Z | 2020-10-02T11:01:59Z | 2020-10-02T11:01:59Z | null | {
"+1": 0,
"-1": 4,
"confused": 2,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/692/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/692/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/692.diff",
"html_url": "https://github.com/huggingface/datasets/pull/692",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/692.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/692"
} | true | [
"Hacktoberfest spam",
"To enhance its readability.....not Hacktoberfest spam",
"How is adding a punctuation to the end of a sentence justified as \"To enhance its readability\". \r\nConsidering that this is not your first \"README enhancement '' please don't spam the open source community with useless PR to get... | |
https://api.github.com/repos/huggingface/datasets/issues/5751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5751/comments | https://api.github.com/repos/huggingface/datasets/issues/5751/events | https://github.com/huggingface/datasets/pull/5751 | 1,668,333,316 | PR_kwDODunzps5OVMuT | 5,751 | Consistent ArrayXD Python formatting + better NumPy/Pandas formatting | [] | closed | false | null | 4 | 2023-04-14T14:13:59Z | 2023-04-20T14:43:20Z | 2023-04-20T14:40:34Z | null | Return a list of lists instead of a list of NumPy arrays when converting the variable-shaped `ArrayXD` to Python. Additionally, improve the NumPy conversion by returning a numeric NumPy array when the offsets are equal or a NumPy object array when they aren't, and allow converting the variable-shaped `ArrayXD` to Pandas.
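A minimal sketch of the formatting behavior described above, with illustrative data (the exact outputs assume this PR is merged):
```python
from datasets import Array2D, Dataset, Features

# Variable-shaped arrays: the first dimension is None.
features = Features({"x": Array2D(shape=(None, 2), dtype="int32")})
ds = Dataset.from_dict({"x": [[[1, 2]], [[3, 4], [5, 6]]]}, features=features)

print(ds[0]["x"])  # Python formatting: a list of lists, i.e. [[1, 2]]

ds = ds.with_format("numpy")
# The two rows have different lengths, so the column comes back as a
# NumPy object array; it would be a numeric array if all shapes matched.
print(ds["x"].dtype)  # object
```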
(Reported in https://github.com/huggingface/datasets/issues/5719#issuecomment-1507579671) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5751/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5751/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5751.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5751",
"merged_at": "2023-04-20T14:40:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5751.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5751"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/627/comments | https://api.github.com/repos/huggingface/datasets/issues/627/events | https://github.com/huggingface/datasets/pull/627 | 701,411,661 | MDExOlB1bGxSZXF1ZXN0NDg2ODcxMTg2 | 627 | fix (#619) MLQA features names | [] | closed | false | null | 0 | 2020-09-14T20:41:59Z | 2020-11-02T21:04:32Z | 2020-09-16T06:54:11Z | null | Fixed the features names as suggested in (#619) in the `_generate_examples` and `_info` methods in the MLQA loading script and also changed the names in the `dataset_infos.json` file. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/627/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/627/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/627.diff",
"html_url": "https://github.com/huggingface/datasets/pull/627",
"merged_at": "2020-09-16T06:54:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/627.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/627"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3714/comments | https://api.github.com/repos/huggingface/datasets/issues/3714/events | https://github.com/huggingface/datasets/issues/3714 | 1,136,105,530 | I_kwDODunzps5Dt5g6 | 3,714 | tatoeba_mt: File not found error and key error | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2022-02-13T16:35:45Z | 2022-02-13T20:44:04Z | 2022-02-13T20:44:04Z | null | ## Dataset viewer issue for 'tatoeba_mt'
**Link:** https://huggingface.co/datasets/Helsinki-NLP/tatoeba_mt
My data loader script does not seem to work.
The files are part of the local repository but cannot be found. An example where it should work is the subset for "afr-eng".
Another problem is that I do not have validation data for all subsets and I don't know how to properly check whether validation exists in the configuration before I try to download it. An example is the subset for "afr-deu".
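One common pattern for this, sketched below as a `_split_generators` method with hypothetical file names (not the actual tatoeba_mt layout), is to only emit the splits whose files exist:
```python
import os
import datasets

# Inside the DatasetBuilder subclass of the loading script:
def _split_generators(self, dl_manager):
    data_dir = dl_manager.download_and_extract(self.config.data_url)  # hypothetical config attribute
    candidates = {
        datasets.Split.TRAIN: "train.tsv",
        datasets.Split.VALIDATION: "valid.tsv",
        datasets.Split.TEST: "test.tsv",
    }
    return [
        datasets.SplitGenerator(name=split, gen_kwargs={"filepath": os.path.join(data_dir, fname)})
        for split, fname in candidates.items()
        if os.path.exists(os.path.join(data_dir, fname))  # skip splits with no data
    ]
```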
Am I the one who added this dataset ? Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3714/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3714/timeline | null | completed | null | null | false | [
"Looks like I solved my problems ..."
] |
https://api.github.com/repos/huggingface/datasets/issues/4292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4292/comments | https://api.github.com/repos/huggingface/datasets/issues/4292/events | https://github.com/huggingface/datasets/pull/4292 | 1,228,216,788 | PR_kwDODunzps43bhrp | 4,292 | Add API code examples for remaining main classes | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 1 | 2022-05-06T18:15:31Z | 2022-05-25T18:05:13Z | 2022-05-25T17:56:36Z | null | This PR adds API code examples for the remaining functions in the Main classes. I wasn't too familiar with some of the functions (`decode_batch`, `decode_column`, `decode_example`, etc.) so please feel free to add an example of usage and I can fill in the rest :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4292/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4292.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4292",
"merged_at": "2022-05-25T17:56:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4292.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4292"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5609 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5609/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5609/comments | https://api.github.com/repos/huggingface/datasets/issues/5609/events | https://github.com/huggingface/datasets/issues/5609 | 1,610,062,862 | I_kwDODunzps5f95wO | 5,609 | `load_from_disk` vs `load_dataset` performance. | [] | open | false | null | 4 | 2023-03-05T05:27:15Z | 2023-07-13T18:48:05Z | null | null | ### Describe the bug
I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices:
1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering.
2. `save_to_disk` and then use `load_from_disk` to load the filtered version.
The performance of these two approaches is wildly different:
* Using `load_dataset` takes about 20 seconds to load the dataset, and a few seconds to re-filter (thanks to the brilliant filter/map caching)
* Using `load_from_disk` takes 14 minutes! And the second time I tried, the session just crashed (on a machine with 32GB of RAM)
I don't know if you'd call this a bug, but it seems like there shouldn't need to be two methods to load from disk, or that they should not take such wildly different amounts of time, or that one should not crash. Or maybe the docs could offer some guidance about when to pick which method, why two methods exist, and how most people actually do this.
Something I couldn't work out from reading the docs was this: can I modify a dataset from the hub, save it (locally) and use `load_dataset` to load it? This [post seemed to suggest that the answer is no](https://discuss.huggingface.co/t/save-and-load-datasets/9260).
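For concreteness, a sketch of the two paths being compared (the filter is a placeholder):
```python
from datasets import load_dataset, load_from_disk

# Option 1: reload via load_dataset and re-apply the filter;
# both steps hit the cache after the first run.
ds = load_dataset("openwebtext", split="train")
ds = ds.filter(lambda ex: len(ex["text"]) > 100)  # placeholder junk filter

# Option 2: persist the filtered dataset once, then reload it directly.
ds.save_to_disk("openwebtext-filtered")
ds2 = load_from_disk("openwebtext-filtered")
```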
### Steps to reproduce the bug
See above
### Expected behavior
Load times should be about the same.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
- Python version: 3.10.8
- PyArrow version: 11.0.0
- Pandas version: 1.5.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5609/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5609/timeline | null | null | null | null | false | [
"Hi! We've recently made some improvements to `save_to_disk`/`list_to_disk` (100x faster in some scenarios), so it would help if you could install `datasets` directly from `main` (`pip install git+https://github.com/huggingface/datasets.git`) and re-run the \"benchmark\".",
"Great to hear! I'll give it a try when... |
https://api.github.com/repos/huggingface/datasets/issues/1765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1765/comments | https://api.github.com/repos/huggingface/datasets/issues/1765/events | https://github.com/huggingface/datasets/issues/1765 | 791,553,065 | MDU6SXNzdWU3OTE1NTMwNjU= | 1,765 | Error iterating over Dataset with DataLoader | [] | closed | false | null | 6 | 2021-01-21T22:56:45Z | 2022-10-28T02:16:38Z | 2021-01-23T03:44:14Z | null | I have a Dataset that I've mapped a tokenizer over:
```
encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])
encoded_dataset[:1]
```
```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 21365, 4515, 8618, 1113,
102]]),
'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]])}
```
When I try to iterate as in the docs, I get errors:
```
dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32)
next(iter(dataloader))
```
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-45-05180ba8aa35> in <module>()
1 dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32)
----> 2 next(iter(dataloader))
3 frames
/usr/local/lib/python3.6/dist-packages/torch/utils/data/dataloader.py in __init__(self, loader)
411 self._timeout = loader.timeout
412 self._collate_fn = loader.collate_fn
--> 413 self._sampler_iter = iter(self._index_sampler)
414 self._base_seed = torch.empty((), dtype=torch.int64).random_(generator=loader.generator).item()
415 self._persistent_workers = loader.persistent_workers
TypeError: 'int' object is not iterable
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1765/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1765/timeline | null | completed | null | null | false | [
"Instead of:\r\n```python\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32)\r\n```\r\nIt should be:\r\n```python\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32)\r\n```\r\n\r\n`batch_sampler` accepts a Sampler object or an Iterable, so you get an error.",
"@... |
https://api.github.com/repos/huggingface/datasets/issues/6014 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6014/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6014/comments | https://api.github.com/repos/huggingface/datasets/issues/6014/events | https://github.com/huggingface/datasets/issues/6014 | 1,798,213,816 | I_kwDODunzps5rLpC4 | 6,014 | Request to Share/Update Dataset Viewer Code | [] | open | false | null | 6 | 2023-07-11T06:36:09Z | 2023-07-12T14:18:49Z | null | null |
Overview:
The repository (huggingface/datasets-viewer) was recently archived, and when I tried to run the code, I got the error message "AttributeError: module 'datasets.load' has no attribute 'prepare_module'". I could not resolve the issue myself due to the lack of documentation for that attribute.
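For context, `prepare_module` was removed from `datasets.load` in later releases; a rough sketch of the closest current entry point (my reading of the newer API, not a drop-in replacement for the archived viewer code):
```python
from datasets.load import dataset_module_factory

# Resolves a dataset name/path to its loading module, roughly what the
# old prepare_module call did for the viewer.
module = dataset_module_factory("squad")
print(module.module_path)
```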
Request:
I kindly request the sharing of the code responsible for the dataset preview functionality or help with resolving the error. The dataset viewer on the Hugging Face website is incredibly useful since it is compatible with different types of inputs. It allows users to find datasets that meet their needs more efficiently. If needed, I am willing to contribute to the project by testing, documenting, and providing feedback on the dataset viewer code.
Thank you for considering this request, and I look forward to your response. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6014/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6014/timeline | null | null | null | null | false | [
"Hi ! The huggingface/dataset-viewer code was not maintained anymore because we switched to a new dataset viewer that is deployed available for each dataset the Hugging Face website.\r\n\r\nWhat are you using this old repository for ?",
"I think these parts are outdated:\r\n\r\n* https://github.com/huggingface/da... |
https://api.github.com/repos/huggingface/datasets/issues/791 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/791/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/791/comments | https://api.github.com/repos/huggingface/datasets/issues/791/events | https://github.com/huggingface/datasets/pull/791 | 734,656,518 | MDExOlB1bGxSZXF1ZXN0NTE0MTg0MzU5 | 791 | add amazon reviews | [] | closed | false | null | 3 | 2020-11-02T16:42:57Z | 2020-11-03T20:15:06Z | 2020-11-03T16:43:57Z | null | Adds the Amazon US Reviews dataset as requested in #353. Converted from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/amazon_us_reviews). cc @clmnt @sshleifer | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 2,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/791/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/791/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/791.diff",
"html_url": "https://github.com/huggingface/datasets/pull/791",
"merged_at": "2020-11-03T16:43:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/791.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/791"
} | true | [
"@patrickvonplaten Yeah this is adapted from tfds so a lot is just how they wrote the code. Addressed your comments and also simplified the weird `AmazonUSReviewsConfig` definition. Will merge once tests pass.",
"Thanks for checking this one :) \r\nLooks good to me \r\n\r\nJust one question : is there a particula... |
https://api.github.com/repos/huggingface/datasets/issues/3015 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3015/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3015/comments | https://api.github.com/repos/huggingface/datasets/issues/3015/events | https://github.com/huggingface/datasets/pull/3015 | 1,015,130,845 | PR_kwDODunzps4so0GX | 3,015 | Extend support for streaming datasets that use glob.glob | [] | closed | false | null | 0 | 2021-10-04T12:42:37Z | 2021-10-05T13:46:39Z | 2021-10-05T13:46:38Z | null | This PR extends the support in streaming mode for datasets that use `glob`, by patching the function `glob.glob`.
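A minimal sketch of the general monkey-patching pattern (not the actual implementation in this PR; the fsspec routing is an assumption):
```python
import glob
import fsspec

_original_glob = glob.glob

def xglob(pattern, recursive=False):
    # Route remote URL patterns through fsspec; fall back to the real
    # glob for local paths.
    if "://" in pattern:
        fs, _, paths = fsspec.get_fs_token_paths(pattern)  # expands the glob
        return [fs.unstrip_protocol(p) for p in paths]
    return _original_glob(pattern, recursive=recursive)

glob.glob = xglob  # applied while a dataset script runs in streaming mode
```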
Related to #2880, #2876, #2874 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3015/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3015/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3015.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3015",
"merged_at": "2021-10-05T13:46:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3015.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3015"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1020/comments | https://api.github.com/repos/huggingface/datasets/issues/1020/events | https://github.com/huggingface/datasets/pull/1020 | 755,601,450 | MDExOlB1bGxSZXF1ZXN0NTMxMjgyODQy | 1,020 | Add Setswana NER | [] | closed | false | null | 0 | 2020-12-02T20:52:07Z | 2020-12-03T14:56:14Z | 2020-12-03T14:56:14Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1020/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1020/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1020.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1020",
"merged_at": "2020-12-03T14:56:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1020.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1020"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/3472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3472/comments | https://api.github.com/repos/huggingface/datasets/issues/3472/events | https://github.com/huggingface/datasets/pull/3472 | 1,086,908,508 | PR_kwDODunzps4wMEwA | 3,472 | Fix `str(Path(...))` conversion in streaming on Linux | [] | closed | false | null | 0 | 2021-12-22T15:06:03Z | 2021-12-22T16:52:53Z | 2021-12-22T16:52:52Z | null | Fix `str(Path(...))` conversion in streaming on Linux. This should fix the streaming of the `beans` and `cats_vs_dogs` datasets. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3472/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3472/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3472.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3472",
"merged_at": "2021-12-22T16:52:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3472.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3472"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2702 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2702/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2702/comments | https://api.github.com/repos/huggingface/datasets/issues/2702/events | https://github.com/huggingface/datasets/pull/2702 | 950,448,159 | MDExOlB1bGxSZXF1ZXN0Njk0OTkyOTc1 | 2,702 | Update BibTeX entry | [] | closed | false | null | 0 | 2021-07-22T09:04:39Z | 2021-07-22T09:17:39Z | 2021-07-22T09:17:38Z | null | Update BibTeX entry. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2702/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2702/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2702.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2702",
"merged_at": "2021-07-22T09:17:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2702.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2702"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4475 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4475/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4475/comments | https://api.github.com/repos/huggingface/datasets/issues/4475/events | https://github.com/huggingface/datasets/pull/4475 | 1,267,798,451 | PR_kwDODunzps45eufw | 4,475 | Improve error message for missing packages from inside dataset script | [] | closed | false | null | 3 | 2022-06-10T16:59:36Z | 2022-10-06T13:46:26Z | 2022-06-13T13:16:43Z | null | Improve the error message for missing packages from inside a dataset script.
With this change, the error message for missing packages for `bigbench` looks as follows:
```
ImportError: To be able to use bigbench, you need to install the following dependencies:
- 'bigbench' using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz"'
```
And this is how it looked before:
```
ImportError: To be able to use bigbench, you need to install the following dependencies['bigbench', 'bigbench', 'bigbench', 'bigbench'] using 'pip install "bigbench @ https://storage.googleapis.com/public_research_data/bigbench/bigbench-0.0.1.tar.gz" bigbench bigbench bigbench' for instance'
``` | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4475/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4475/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4475.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4475",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4475.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4475"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I opened a PR before I noticed yours ^^' You can find it here: https://github.com/huggingface/datasets/pull/4484\r\n\r\nThe only comment I have regarding your message is that it possibly shows several `pip install` commands, whereas ... |