url stringlengths 61 61 | repository_url stringclasses 1 value | labels_url stringlengths 75 75 | comments_url stringlengths 70 70 | events_url stringlengths 68 68 | html_url stringlengths 49 51 | id int64 758M 1.95B | node_id stringlengths 18 32 | number int64 1.2k 6.31k | title stringlengths 1 290 | user dict | labels listlengths 0 3 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 4 | milestone dict | comments listlengths 0 30 | created_at timestamp[ns, tz=UTC] | updated_at timestamp[ns, tz=UTC] | closed_at timestamp[ns, tz=UTC] | author_association stringclasses 3 values | active_lock_reason float64 | draft float64 0 1 ⌀ | pull_request dict | body stringlengths 0 36.2k ⌀ | reactions dict | timeline_url stringlengths 70 70 | performed_via_github_app float64 | state_reason stringclasses 3 values | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6006/comments | https://api.github.com/repos/huggingface/datasets/issues/6006/events | https://github.com/huggingface/datasets/issues/6006 | 1,788,855,582 | I_kwDODunzps5qn8Ue | 6,006 | NotADirectoryError when loading gigawords | {
"avatar_url": "https://avatars.githubusercontent.com/u/115634163?v=4",
"events_url": "https://api.github.com/users/xipq/events{/privacy}",
"followers_url": "https://api.github.com/users/xipq/followers",
"following_url": "https://api.github.com/users/xipq/following{/other_user}",
"gists_url": "https://api.github.com/users/xipq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/xipq",
"id": 115634163,
"login": "xipq",
"node_id": "U_kgDOBuRv8w",
"organizations_url": "https://api.github.com/users/xipq/orgs",
"received_events_url": "https://api.github.com/users/xipq/received_events",
"repos_url": "https://api.github.com/users/xipq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/xipq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/xipq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/xipq"
} | [] | closed | false | null | [] | null | [
"issue due to corrupted download files. resolved after cleaning download cache. sorry for any inconvinence."
] | 2023-07-05T06:23:41Z | 2023-07-05T06:31:02Z | 2023-07-05T06:31:01Z | NONE | null | null | null | ### Describe the bug
Got `NotADirectoryError` when loading the gigaword dataset.
### Steps to reproduce the bug
When running
```
import datasets
datasets.load_dataset('gigaword')
```
Got the following exception:
```bash
Traceback (most recent call last):
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1629, in _prepare_split_single
for key, record in generator:
File "/home/x/.cache/huggingface/modules/datasets_modules/datasets/gigaword/ea83a8b819190acac5f2dae011fad51dccf269a0604ec5dd24795b64efb424b6/gigaword.py", line 115, in _generate_examples
with open(src_path, encoding="utf-8") as f_d, open(tgt_path, encoding="utf-8") as f_s:
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/streaming.py", line 71, in wrapper
return function(*args, use_auth_token=use_auth_token, **kwargs)
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/download/streaming_download_manager.py", line 493, in xopen
return open(main_hop, mode, *args, **kwargs)
NotADirectoryError: [Errno 20] Not a directory: '/home/x/.cache/huggingface/datasets/downloads/6da52431bb5124d90cf51a0187d2dbee9046e89780c4be7599794a4f559048ec/org_data/train.src.txt'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "gigaword.py", line 38, in <module>
main()
File "gigaword.py", line 35, in main
train, dev, test = dataset.generate_k_shot_data(k=32, seed=seed, path="../data/")
File "/home/x/MICL/preprocess/fewshot_gym_dataset.py", line 199, in generate_k_shot_data
dataset = self.load_dataset()
File "gigaword.py", line 29, in load_dataset
return datasets.load_dataset('gigaword')
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/load.py", line 1809, in load_dataset
builder_instance.download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 909, in download_and_prepare
self._download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1670, in _download_and_prepare
super()._download_and_prepare(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1508, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/x/.conda/envs/dataproc/lib/python3.8/site-packages/datasets/builder.py", line 1665, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset
```
### Expected behavior
Download and process the dataset successfully
### Environment info
- `datasets` version: 2.13.1
- Platform: Linux-5.0.0-1032-azure-x86_64-with-glibc2.10
- Python version: 3.8.0
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6006/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6006/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2068 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2068/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2068/comments | https://api.github.com/repos/huggingface/datasets/issues/2068/events | https://github.com/huggingface/datasets/issues/2068 | 833,602,832 | MDU6SXNzdWU4MzM2MDI4MzI= | 2,068 | PyTorch not available error on SageMaker GPU docker though it is installed | {
"avatar_url": "https://avatars.githubusercontent.com/u/1651457?v=4",
"events_url": "https://api.github.com/users/sivakhno/events{/privacy}",
"followers_url": "https://api.github.com/users/sivakhno/followers",
"following_url": "https://api.github.com/users/sivakhno/following{/other_user}",
"gists_url": "https://api.github.com/users/sivakhno/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/sivakhno",
"id": 1651457,
"login": "sivakhno",
"node_id": "MDQ6VXNlcjE2NTE0NTc=",
"organizations_url": "https://api.github.com/users/sivakhno/orgs",
"received_events_url": "https://api.github.com/users/sivakhno/received_events",
"repos_url": "https://api.github.com/users/sivakhno/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/sivakhno/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sivakhno/subscriptions",
"type": "User",
"url": "https://api.github.com/users/sivakhno"
} | [] | closed | false | null | [] | null | [
"cc @philschmid ",
"Hey @sivakhno,\r\n\r\nhow does your `requirements.txt` look like to install the `datasets` library and which version of it are you running? Can you try to install `datasets>=1.4.0`",
"Hi @philschmid - thanks for suggestion. I am using `datasets==1.4.1`. \r\nI have also tried using `torch=1.6... | 2021-03-17T10:04:27Z | 2021-06-14T04:47:30Z | 2021-06-14T04:47:30Z | NONE | null | null | null | I get en error when running data loading using SageMaker SDK
```
File "main.py", line 34, in <module>
run_training()
File "main.py", line 25, in run_training
dm.setup('fit')
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
return fn(*args, **kwargs)
File "/opt/ml/code/data_module.py", line 103, in setup
self.dataset[split].set_format(type="torch", columns=self.columns)
File "/opt/conda/lib/python3.6/site-packages/datasets/fingerprint.py", line 337, in wrapper
out = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 995, in set_format
_ = get_formatter(type, **format_kwargs)
File "/opt/conda/lib/python3.6/site-packages/datasets/formatting/__init__.py", line 114, in get_formatter
raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type]
ValueError: PyTorch needs to be installed to be able to return PyTorch tensors.
```
when trying to execute dataset loading using this notebook https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb, specifically lines
```
self.columns = [c for c in self.dataset[split].column_names if c in self.loader_columns]
self.dataset[split].set_format(type="torch", columns=self.columns)
```
The SageMaker docker image used is 763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.4.0-gpu-py3.
By running the container interactively, I have checked that torch loading completes successfully by executing `https://github.com/huggingface/datasets/blob/master/src/datasets/config.py#L39`.
Also, as a first line in the data loading module I have
```
import os
os.environ["USE_TF"] = "0"
os.environ["USE_TORCH"] = "1"
```
But unfortunately the error still persists. Any suggestions would be appreciated as I am stuck.
Many Thanks!
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2068/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2068/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4768/comments | https://api.github.com/repos/huggingface/datasets/issues/4768/events | https://github.com/huggingface/datasets/pull/4768 | 1,321,913,645 | PR_kwDODunzps48TRUH | 4,768 | Unpin rouge_score test dependency | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-29T08:17:40Z | 2022-07-29T16:42:28Z | 2022-07-29T16:29:17Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4768.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4768",
"merged_at": "2022-07-29T16:29:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4768.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4768"
} | Once `rouge-score` has made the 0.1.2 release to fix their issue https://github.com/google-research/google-research/issues/1212, we can unpin it.
Related to:
- #4735 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4768/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4768/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5387/comments | https://api.github.com/repos/huggingface/datasets/issues/5387/events | https://github.com/huggingface/datasets/issues/5387 | 1,508,740,177 | I_kwDODunzps5Z7YxR | 5,387 | Missing documentation page : improve-performance | {
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"events_url": "https://api.github.com/users/astariul/events{/privacy}",
"followers_url": "https://api.github.com/users/astariul/followers",
"following_url": "https://api.github.com/users/astariul/following{/other_user}",
"gists_url": "https://api.github.com/users/astariul/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/astariul",
"id": 43774355,
"login": "astariul",
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"organizations_url": "https://api.github.com/users/astariul/orgs",
"received_events_url": "https://api.github.com/users/astariul/received_events",
"repos_url": "https://api.github.com/users/astariul/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/astariul/subscriptions",
"type": "User",
"url": "https://api.github.com/users/astariul"
} | [] | closed | false | null | [] | null | [
"Hi! Our documentation builder does not support links to sections, hence the bug. This is the link it should point to https://huggingface.co/docs/datasets/v2.8.0/en/cache#improve-performance."
] | 2022-12-23T01:12:57Z | 2023-01-24T16:33:40Z | 2023-01-24T16:33:40Z | NONE | null | null | null | ### Describe the bug
Trying to access https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/cache#improve-performance, the page is missing.
The link is in here : https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/loading_methods#datasets.load_dataset.keep_in_memory
### Steps to reproduce the bug
Access the page and see it's missing.
### Expected behavior
Not missing page
### Environment info
Doesn't matter | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5387/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5387/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5618 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5618/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5618/comments | https://api.github.com/repos/huggingface/datasets/issues/5618/events | https://github.com/huggingface/datasets/issues/5618 | 1,612,977,934 | I_kwDODunzps5gJBcO | 5,618 | Unpin fsspec < 2023.3.0 once issue fixed | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2023-03-07T08:41:51Z | 2023-03-07T13:39:03Z | 2023-03-07T13:39:03Z | MEMBER | null | null | null | Unpin `fsspec` upper version once root cause of our CI break is fixed.
See:
- #5614 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5618/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5618/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5532 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5532/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5532/comments | https://api.github.com/repos/huggingface/datasets/issues/5532/events | https://github.com/huggingface/datasets/issues/5532 | 1,584,505,128 | I_kwDODunzps5ecaEo | 5,532 | train_test_split in arrow_dataset does not ensure to keep single classes in test set | {
"avatar_url": "https://avatars.githubusercontent.com/u/37191008?v=4",
"events_url": "https://api.github.com/users/Ulipenitz/events{/privacy}",
"followers_url": "https://api.github.com/users/Ulipenitz/followers",
"following_url": "https://api.github.com/users/Ulipenitz/following{/other_user}",
"gists_url": "https://api.github.com/users/Ulipenitz/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ulipenitz",
"id": 37191008,
"login": "Ulipenitz",
"node_id": "MDQ6VXNlcjM3MTkxMDA4",
"organizations_url": "https://api.github.com/users/Ulipenitz/orgs",
"received_events_url": "https://api.github.com/users/Ulipenitz/received_events",
"repos_url": "https://api.github.com/users/Ulipenitz/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ulipenitz/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ulipenitz/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ulipenitz"
} | [] | closed | false | null | [] | null | [
"Hi! You can get this behavior by specifying `stratify_by_column=\"label\"` in `train_test_split`.\r\n\r\nThis is the full example:\r\n```python\r\nimport numpy as np\r\nfrom datasets import Dataset, ClassLabel\r\n\r\ndata = [\r\n {'label': 0, 'text': \"example1\"},\r\n {'label': 1, 'text': \"example2\"},\r\n... | 2023-02-14T16:52:29Z | 2023-02-15T16:09:19Z | 2023-02-15T16:09:19Z | NONE | null | null | null | ### Describe the bug
When I have a dataset with very few (e.g., 1) examples per class and I call the train_test_split function on it, sometimes the only example of a class ends up in the test set and is thus never considered for training.
### Steps to reproduce the bug
```
import numpy as np
from datasets import Dataset
data = [
{'label': 0, 'text': "example1"},
{'label': 1, 'text': "example2"},
{'label': 1, 'text': "example3"},
{'label': 1, 'text': "example4"},
{'label': 0, 'text': "example5"},
{'label': 1, 'text': "example6"},
{'label': 2, 'text': "example7"},
{'label': 2, 'text': "example8"}
]
for _ in range(10):
data_set = Dataset.from_list(data)
data_set = data_set.train_test_split(test_size=0.5)
data_set["train"]
unique_labels_train = np.unique(data_set["train"][:]["label"])
unique_labels_test = np.unique(data_set["test"][:]["label"])
assert len(unique_labels_train) >= len(unique_labels_test)
```
### Expected behavior
I expect to have every available class at least once in my training set.
### Environment info
- `datasets` version: 2.9.0
- Platform: Linux-5.15.65+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyArrow version: 11.0.0
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5532/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5532/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2920/comments | https://api.github.com/repos/huggingface/datasets/issues/2920/events | https://github.com/huggingface/datasets/pull/2920 | 997,323,014 | PR_kwDODunzps4ry4_u | 2,920 | Fix unwanted tqdm bar when accessing examples | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-09-15T17:09:11Z | 2021-09-15T17:18:24Z | 2021-09-15T17:18:24Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2920.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2920",
"merged_at": "2021-09-15T17:18:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2920.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2920"
} | A change in #2814 added bad progress bars in `map_nested`. Now they're disabled by default
Fix #2919 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2920/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2920/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2929/comments | https://api.github.com/repos/huggingface/datasets/issues/2929/events | https://github.com/huggingface/datasets/pull/2929 | 997,960,024 | PR_kwDODunzps4r015C | 2,929 | Add regression test for null Sequence | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2021-09-16T08:58:33Z | 2021-09-17T08:23:59Z | 2021-09-17T08:23:59Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2929.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2929",
"merged_at": "2021-09-17T08:23:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2929.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2929"
} | Relates to #2892 and #2900. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2929/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2929/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1882/comments | https://api.github.com/repos/huggingface/datasets/issues/1882/events | https://github.com/huggingface/datasets/pull/1882 | 808,716,576 | MDExOlB1bGxSZXF1ZXN0NTczNzA4OTEw | 1,882 | Create Remote Manager | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | null | [] | null | [
"@lhoestq I have refactorized the logic. Instead of the previous hierarchy call (local temp file opening -> remote call -> use again temp local file logic but from within the remote caller scope), now it is flattened. Schematically:\r\n```python\r\nwith src.open() as src_file, dst.open() as dst_file:\r\n src_fil... | 2021-02-15T17:36:24Z | 2022-07-06T15:19:47Z | null | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1882.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1882",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1882.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1882"
} | Refactoring to separate the concern of remote (HTTP/FTP requests) management. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1882/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1882/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4843/comments | https://api.github.com/repos/huggingface/datasets/issues/4843/events | https://github.com/huggingface/datasets/pull/4843 | 1,337,668,699 | PR_kwDODunzps49HaWT | 4,843 | Fix typo in streaming docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/flozi00",
"id": 47894090,
"login": "flozi00",
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"repos_url": "https://api.github.com/users/flozi00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/flozi00"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-08-12T20:18:21Z | 2022-08-14T11:43:30Z | 2022-08-14T11:02:09Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4843.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4843",
"merged_at": "2022-08-14T11:02:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4843.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4843"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4843/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4843/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5249/comments | https://api.github.com/repos/huggingface/datasets/issues/5249/events | https://github.com/huggingface/datasets/issues/5249 | 1,451,692,247 | I_kwDODunzps5WhxDX | 5,249 | Protect the main branch from inadvertent direct pushes | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d4c5f9",
"default": false,
"description": "Maintenance tasks",
"id": 4296013012,
"name": "maintenance",
"node_id": "LA_kwDODunzps8AAAABAA_01A",
"url": "https://api.github.com/repos/huggingface/datasets/labels/maintenance"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"It seems all the tasks have been addressed, meaning this issue can be closed, no?"
] | 2022-11-16T14:19:03Z | 2023-07-21T14:34:44Z | null | MEMBER | null | null | null | We have decided to implement a protection mechanism in this repository, so that nobody (not even administrators) can inadvertently push directly to the main branch.
See context here:
- d7c942228b8dcf4de64b00a3053dce59b335f618
To do:
- [x] Protect main branch
- Settings > Branches > Branch protection rules > main > Edit
- [x] Check: Do not allow bypassing the above settings
- The above settings will apply to administrators and custom roles with the "bypass branch protections" permission.
- [x] Additionally, uncheck: Require approvals [under "Require a pull request before merging", which was already checked]
- Before, we could exceptionally merge a non-approved PR, using Administrator bypass
- Now that Administrator bypass is no longer possible, we would always need an approval to be able to merge; and pull request authors cannot approve their own pull requests. This could be an inconvenient in some exceptional circumstances when an urgent fix is needed
- Nevertheless, although it is no longer enforced, it is strongly recommended to merge PRs only if they have at least one approval
- [ ] #5250
- So that direct pushes to main branch are no longer necessary | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5249/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5249/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2769 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2769/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2769/comments | https://api.github.com/repos/huggingface/datasets/issues/2769/events | https://github.com/huggingface/datasets/pull/2769 | 963,240,802 | MDExOlB1bGxSZXF1ZXN0NzA1ODk5MTYy | 2,769 | Allow PyArrow from source | {
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/patrickvonplaten",
"id": 23423619,
"login": "patrickvonplaten",
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"type": "User",
"url": "https://api.github.com/users/patrickvonplaten"
} | [] | closed | false | null | [] | null | [] | 2021-08-07T14:26:44Z | 2021-08-09T15:38:39Z | 2021-08-09T15:38:39Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2769.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2769",
"merged_at": "2021-08-09T15:38:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2769.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2769"
} | When installing pyarrow from source the version is:
```python
>>> import pyarrow; pyarrow.__version__
'2.1.0.dev612'
```
-> however this breaks the install check at init of `datasets`. This PR makes sure that everything coming after the last `'.'` is removed. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2769/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2769/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3992 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3992/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3992/comments | https://api.github.com/repos/huggingface/datasets/issues/3992/events | https://github.com/huggingface/datasets/issues/3992 | 1,177,946,153 | I_kwDODunzps5GNggp | 3,992 | Image column is not decoded in map when using with with_transform | {
"avatar_url": "https://avatars.githubusercontent.com/u/5902432?v=4",
"events_url": "https://api.github.com/users/phihung/events{/privacy}",
"followers_url": "https://api.github.com/users/phihung/followers",
"following_url": "https://api.github.com/users/phihung/following{/other_user}",
"gists_url": "https://api.github.com/users/phihung/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/phihung",
"id": 5902432,
"login": "phihung",
"node_id": "MDQ6VXNlcjU5MDI0MzI=",
"organizations_url": "https://api.github.com/users/phihung/orgs",
"received_events_url": "https://api.github.com/users/phihung/received_events",
"repos_url": "https://api.github.com/users/phihung/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/phihung/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/phihung/subscriptions",
"type": "User",
"url": "https://api.github.com/users/phihung"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
... | null | [
"Hi! This behavior stems from this line: https://github.com/huggingface/datasets/blob/799b817d97590ddc97cbd38d07469403e030de8c/src/datasets/arrow_dataset.py#L1919\r\nBasically, the `Image`/`Audio` columns are decoded only if the `format_type` attribute is `None` (`set_format`/`with_format` and `set_transform`/`with... | 2022-03-23T10:51:13Z | 2022-12-13T16:59:06Z | 2022-12-13T16:59:06Z | NONE | null | null | null | ## Describe the bug
Image column is not _decoded_ in **map** when used with `with_transform`.
## Steps to reproduce the bug
```python
from datasets import Image, Dataset
def add_C(batch):
batch["C"] = batch["A"]
return batch
ds = Dataset.from_dict({"A": ["image.png"]}).cast_column("A", Image())
ds = ds.with_transform(lambda x: x) # <= This line causes the problem
ds = ds.map(add_C, batched=True)
print(ds[0])
```
## Expected results
```
{'C': <PIL.PngImagePlugin.PngImageFile>, ...}
```
## Actual results
```
{'C': {'bytes': None, 'path': 'image.png'}, ...}
```
If we remove the `with_transform` line, we get the expected result.
## Environment info
- `datasets` version: 2.0.0
- Platform: Mac OSX
- Python version: 3.8.12
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3992/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3992/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3773 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3773/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3773/comments | https://api.github.com/repos/huggingface/datasets/issues/3773/events | https://github.com/huggingface/datasets/issues/3773 | 1,146,758,335 | I_kwDODunzps5EWiS_ | 3,773 | Checksum mismatch for the reddit_tifu dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/56791604?v=4",
"events_url": "https://api.github.com/users/anna-kay/events{/privacy}",
"followers_url": "https://api.github.com/users/anna-kay/followers",
"following_url": "https://api.github.com/users/anna-kay/following{/other_user}",
"gists_url": "https://api.github.com/users/anna-kay/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/anna-kay",
"id": 56791604,
"login": "anna-kay",
"node_id": "MDQ6VXNlcjU2NzkxNjA0",
"organizations_url": "https://api.github.com/users/anna-kay/orgs",
"received_events_url": "https://api.github.com/users/anna-kay/received_events",
"repos_url": "https://api.github.com/users/anna-kay/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/anna-kay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/anna-kay/subscriptions",
"type": "User",
"url": "https://api.github.com/users/anna-kay"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Thanks for reporting, @anna-kay. We are fixing it.",
"@albertvillanova Thank you for the fast response! However I am still getting the same error:\r\n\r\nDownloading: 2.23kB [00:00, ?B/s]\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Anna\\PycharmProjects\\summarization\\main.py\", line 17, in <mo... | 2022-02-22T10:57:07Z | 2022-02-25T19:27:49Z | 2022-02-22T12:38:44Z | CONTRIBUTOR | null | null | null | ## Describe the bug
A checksum mismatch occurs when downloading the reddit_tifu data (both long & short).
## Steps to reproduce the bug
reddit_tifu_dataset = load_dataset('reddit_tifu', 'long')
## Expected results
The expected result is for the dataset to be downloaded and cached locally.
## Actual results
File "/.../lib/python3.9/site-packages/datasets/utils/info_utils.py", line 40, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=1ffWfITKFMJeqjT8loC8aiCLRNJpc_XnF']
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.31
- Python version: 3.9.7
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3773/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3773/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2514/comments | https://api.github.com/repos/huggingface/datasets/issues/2514/events | https://github.com/huggingface/datasets/issues/2514 | 924,417,172 | MDU6SXNzdWU5MjQ0MTcxNzI= | 2,514 | Can datasets remove duplicated rows? | {
"avatar_url": "https://avatars.githubusercontent.com/u/16516583?v=4",
"events_url": "https://api.github.com/users/liuxinglan/events{/privacy}",
"followers_url": "https://api.github.com/users/liuxinglan/followers",
"following_url": "https://api.github.com/users/liuxinglan/following{/other_user}",
"gists_url": "https://api.github.com/users/liuxinglan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/liuxinglan",
"id": 16516583,
"login": "liuxinglan",
"node_id": "MDQ6VXNlcjE2NTE2NTgz",
"organizations_url": "https://api.github.com/users/liuxinglan/orgs",
"received_events_url": "https://api.github.com/users/liuxinglan/received_events",
"repos_url": "https://api.github.com/users/liuxinglan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/liuxinglan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/liuxinglan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/liuxinglan"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"Hi ! For now this is probably the best option.\r\nWe might add a feature like this in the feature as well.\r\n\r\nDo you know any deduplication method that works on arbitrary big datasets without filling up RAM ?\r\nOtherwise we can have do the deduplication in memory like pandas but I feel like this is going to b... | 2021-06-17T23:35:38Z | 2022-09-10T14:43:26Z | null | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
I find myself relying more and more on datasets just to do all the preprocessing. One thing, however: for removing duplicated rows, I couldn't find out how, and am always converting datasets to pandas to do that.
**Describe the solution you'd like**
Have a "remove duplicated rows" functionality.
**Describe alternatives you've considered**
Convert the dataset to pandas, remove duplicates, and convert back...
**Additional context**
no | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2514/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2514/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2227/comments | https://api.github.com/repos/huggingface/datasets/issues/2227/events | https://github.com/huggingface/datasets/pull/2227 | 859,771,526 | MDExOlB1bGxSZXF1ZXN0NjE2Nzk1NjMx | 2,227 | Use update_metadata_with_features decorator in class_encode_column method | {
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"events_url": "https://api.github.com/users/SBrandeis/events{/privacy}",
"followers_url": "https://api.github.com/users/SBrandeis/followers",
"following_url": "https://api.github.com/users/SBrandeis/following{/other_user}",
"gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/SBrandeis",
"id": 33657802,
"login": "SBrandeis",
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"organizations_url": "https://api.github.com/users/SBrandeis/orgs",
"received_events_url": "https://api.github.com/users/SBrandeis/received_events",
"repos_url": "https://api.github.com/users/SBrandeis/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions",
"type": "User",
"url": "https://api.github.com/users/SBrandeis"
} | [] | closed | false | null | [] | null | [] | 2021-04-16T12:31:41Z | 2021-04-16T13:49:40Z | 2021-04-16T13:49:39Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2227.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2227",
"merged_at": "2021-04-16T13:49:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2227.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2227"
} | Following @mariosasko 's comment | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2227/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2227/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2244/comments | https://api.github.com/repos/huggingface/datasets/issues/2244/events | https://github.com/huggingface/datasets/pull/2244 | 863,029,946 | MDExOlB1bGxSZXF1ZXN0NjE5NTAyODc0 | 2,244 | Set specific cache directories per test function call | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | open | false | null | [] | {
"closed_at": null,
"closed_issues": 2,
"created_at": "2021-07-21T15:34:56Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-30T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/8",
"id": 6968069,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/8/labels",
"node_id": "MI_kwDODunzps4AalMF",
"number": 8,
"open_issues": 4,
"state": "open",
"title": "1.12",
"updated_at": "2021-10-13T10:26:33Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/8"
} | [
"@lhoestq, I think this reaches some memory limit on Linux instances... (?)",
"It looks like the `comet` metric test fails because it tries to load a model in memory.\r\nIn the tests I think we have `patch_comet` that mocks the model download + inference. Not sure why it didn't work though.\r\nI can take a look t... | 2021-04-20T17:06:22Z | 2022-07-06T15:19:48Z | null | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2244.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2244",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2244.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2244"
} | Implement specific cache directories (datasets, metrics and modules) per test function call.
Currently, the cache directories are set within the temporary test directory, but they are shared across all test function calls.
This PR implements specific cache directories for each test function call, so that tests are atomic and there are no side effects.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2244/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2244/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3229/comments | https://api.github.com/repos/huggingface/datasets/issues/3229/events | https://github.com/huggingface/datasets/pull/3229 | 1,046,706,425 | PR_kwDODunzps4uMKsx | 3,229 | Fix URL in CITATION file | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2021-11-07T10:04:35Z | 2021-11-07T10:04:46Z | 2021-11-07T10:04:45Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3229.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3229",
"merged_at": "2021-11-07T10:04:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3229.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3229"
} | Currently the BibTeX citation parsed from the CITATION file has wrong URL (it shows the repo URL instead of the proceedings paper URL):
```
@inproceedings{Lhoest_Datasets_A_Community_2021,
author = {Lhoest, Quentin and Villanova del Moral, Albert and von Platen, Patrick and Wolf, Thomas and Šaško, Mario and Jernite, Yacine and Thakur, Abhishek and Tunstall, Lewis and Patil, Suraj and Drame, Mariama and Chaumond, Julien and Plu, Julien and Davison, Joe and Brandeis, Simon and Sanh, Victor and Le Scao, Teven and Canwen Xu, Kevin and Patry, Nicolas and Liu, Steven and McMillan-Major, Angelina and Schmid, Philipp and Gugger, Sylvain and Raw, Nathan and Lesage, Sylvain and Lozhkov, Anton and Carrigan, Matthew and Matussière, Théo and von Werra, Leandro and Debut, Lysandre and Bekman, Stas and Delangue, Clément},
booktitle = {Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
month = {11},
pages = {175--184},
publisher = {Association for Computational Linguistics},
title = {{Datasets: A Community Library for Natural Language Processing}},
url = {https://github.com/huggingface/datasets},
year = {2021}
}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3229/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3229/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3608 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3608/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3608/comments | https://api.github.com/repos/huggingface/datasets/issues/3608/events | https://github.com/huggingface/datasets/issues/3608 | 1,109,310,981 | I_kwDODunzps5CHr4F | 3,608 | Add support for continuous metrics (RMSE, MAE) | {
"avatar_url": "https://avatars.githubusercontent.com/u/50770?v=4",
"events_url": "https://api.github.com/users/ck37/events{/privacy}",
"followers_url": "https://api.github.com/users/ck37/followers",
"following_url": "https://api.github.com/users/ck37/following{/other_user}",
"gists_url": "https://api.github.com/users/ck37/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ck37",
"id": 50770,
"login": "ck37",
"node_id": "MDQ6VXNlcjUwNzcw",
"organizations_url": "https://api.github.com/users/ck37/orgs",
"received_events_url": "https://api.github.com/users/ck37/received_events",
"repos_url": "https://api.github.com/users/ck37/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ck37/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ck37/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ck37"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "7057ff",
"default": true... | closed | false | null | [] | null | [
"Hey @ck37 \r\n\r\nYou can always use a custom metric as explained [in this guide from HF](https://huggingface.co/docs/datasets/master/loading_metrics.html#using-a-custom-metric-script).\r\n\r\nIf this issue needs to be contributed to (for enhancing the metric API) I think [this link](https://scikit-learn.org/stabl... | 2022-01-20T13:35:36Z | 2022-03-09T17:18:20Z | 2022-03-09T17:18:20Z | NONE | null | null | null | **Is your feature request related to a problem? Please describe.**
I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for hate speech. Once we have this outcome our NLP models conduct regression rather than classification, so binary metrics are not relevant. The only continuous metrics available at https://huggingface.co/metrics are pearson & spearman correlation, which don't ensure that the prediction is on the same scale as the outcome.
**Describe the solution you'd like**
I would like to be able to tag our models on the Hub with the following metrics:
- RMSE
- MAE
**Describe alternatives you've considered**
I don't know if there are any alternatives.
**Additional context**
Our preprint is available here: https://arxiv.org/abs/2009.10277 . We are making it available for use in Jigsaw's Toxic Severity Rating Kaggle competition: https://www.kaggle.com/c/jigsaw-toxic-severity-rating/overview . I have our first model uploaded to the Hub at https://huggingface.co/ucberkeley-dlab/hate-measure-roberta-large
Thanks,
Chris
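For readers unfamiliar with the two requested metrics, a minimal illustration (added for clarity; not part of the original request) of how they are computed with scikit-learn:
```python
# Illustrative only: RMSE and MAE for a regression-style prediction task.
from sklearn.metrics import mean_absolute_error, mean_squared_error

y_true = [0.10, 0.70, 1.30]
y_pred = [0.20, 0.60, 1.55]

mae = mean_absolute_error(y_true, y_pred)          # mean absolute error
rmse = mean_squared_error(y_true, y_pred) ** 0.5   # root of the mean squared error
print(f"MAE={mae:.3f}, RMSE={rmse:.3f}")
```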
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3608/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3608/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4922/comments | https://api.github.com/repos/huggingface/datasets/issues/4922/events | https://github.com/huggingface/datasets/issues/4922 | 1,357,684,018 | I_kwDODunzps5Q7J0y | 4,922 | I/O error on Google Colab in streaming mode | {
"avatar_url": "https://avatars.githubusercontent.com/u/5595043?v=4",
"events_url": "https://api.github.com/users/jotterbach/events{/privacy}",
"followers_url": "https://api.github.com/users/jotterbach/followers",
"following_url": "https://api.github.com/users/jotterbach/following{/other_user}",
"gists_url": "https://api.github.com/users/jotterbach/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jotterbach",
"id": 5595043,
"login": "jotterbach",
"node_id": "MDQ6VXNlcjU1OTUwNDM=",
"organizations_url": "https://api.github.com/users/jotterbach/orgs",
"received_events_url": "https://api.github.com/users/jotterbach/received_events",
"repos_url": "https://api.github.com/users/jotterbach/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jotterbach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jotterbach/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jotterbach"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [] | 2022-08-31T18:08:26Z | 2022-08-31T18:15:48Z | 2022-08-31T18:15:48Z | NONE | null | null | null | ## Describe the bug
When trying to load a streaming dataset in Google Colab, the loading fails with an I/O error
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION)
list(hf_ds.take(5))
```
## Expected results
It should load five data points
## Actual results
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-13-7b5b8b1e7e58>](https://localhost:8080/#) in <module>
2 from datasets import load_dataset
3 hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION)
----> 4 list(hf_ds.take(5))
6 frames
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self)
716
717 def __iter__(self):
--> 718 for key, example in self._iter():
719 if self.features:
720 # `IterableDataset` automatically fills missing columns with None.
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in _iter(self)
706 else:
707 ex_iterable = self._ex_iterable
--> 708 yield from ex_iterable
709
710 def _iter_shard(self, shard_idx: int):
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self)
582
583 def __iter__(self):
--> 584 yield from islice(self.ex_iterable, self.n)
585
586 def shuffle_data_sources(self, generator: np.random.Generator) -> "TakeExamplesIterable":
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self)
110
111 def __iter__(self):
--> 112 yield from self.generate_examples_fn(**self.kwargs)
113
114 def shuffle_data_sources(self, generator: np.random.Generator) -> "ExamplesIterable":
[~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _generate_examples(self, split_subsets, extraction_map, with_translation)
845 raise ValueError("Invalid number of files: %d" % len(files))
846
--> 847 for sub_key, ex in sub_generator(*sub_generator_args):
848 if not all(ex.values()):
849 continue
[~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _parse_parallel_sentences(f1, f2, filename1, filename2)
923 l2_sentences, l2 = parse_file(f2_i, filename2)
924
--> 925 for line_id, (s1, s2) in enumerate(zip(l1_sentences, l2_sentences)):
926 key = f"{f_id}/{line_id}"
927 yield key, {l1: s1, l2: s2}
[~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in gen()
895
896 def gen():
--> 897 with open(path, encoding="utf-8") as f:
898 for line in f:
899 seg_match = re.match(seg_re, line)
ValueError: I/O operation on closed file.
```
## Environment info
Copy-and-paste the text below in your GitHub issue.
- `datasets` version: 2.4.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 9.0.0. (the same error happened with PyArrow version 6.0.0)
- Pandas version: 1.3.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4922/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4922/timeline | null | not_planned | false |
https://api.github.com/repos/huggingface/datasets/issues/1222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1222/comments | https://api.github.com/repos/huggingface/datasets/issues/1222/events | https://github.com/huggingface/datasets/pull/1222 | 758,018,953 | MDExOlB1bGxSZXF1ZXN0NTMzMjYzODIx | 1,222 | Add numeric fused head dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}",
"followers_url": "https://api.github.com/users/ghomasHudson/followers",
"following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}",
"gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ghomasHudson",
"id": 13795113,
"login": "ghomasHudson",
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"organizations_url": "https://api.github.com/users/ghomasHudson/orgs",
"received_events_url": "https://api.github.com/users/ghomasHudson/received_events",
"repos_url": "https://api.github.com/users/ghomasHudson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ghomasHudson"
} | [] | closed | false | null | [] | null | [
"> Thanks for adding this @ghomasHudson!\r\n> I added some comments for some of the fields.\r\n> \r\n> Also, I'm not sure about this since I haven't used the library yet, but maybe it's worth adding the identification and resolution as two separate datasets?\r\n\r\nThanks for replying @yanaiela - I hope this will m... | 2020-12-06T20:46:53Z | 2020-12-08T11:17:56Z | 2020-12-08T11:17:55Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1222.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1222",
"merged_at": "2020-12-08T11:17:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1222.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1222"
} | Adding the [NFH: Numeric Fused Head](https://nlp.biu.ac.il/~lazary/fh/) dataset.
Everything looks sensible and I've included both the identification and resolution tasks. I haven't personally used this dataset in my research so am unable to specify what the default configuration / supervised keys should be.
I've filled out the basic info on the model card to the best of my knowledge but it's a little tricky to understand exactly what the fields represent.
Dataset author: @yanaiela | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1222/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1222/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4235/comments | https://api.github.com/repos/huggingface/datasets/issues/4235/events | https://github.com/huggingface/datasets/issues/4235 | 1,216,952,640 | I_kwDODunzps5IiTlA | 4,235 | How to load VERY LARGE dataset? | {
"avatar_url": "https://avatars.githubusercontent.com/u/45160643?v=4",
"events_url": "https://api.github.com/users/CaoYiqingT/events{/privacy}",
"followers_url": "https://api.github.com/users/CaoYiqingT/followers",
"following_url": "https://api.github.com/users/CaoYiqingT/following{/other_user}",
"gists_url": "https://api.github.com/users/CaoYiqingT/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/CaoYiqingT",
"id": 45160643,
"login": "CaoYiqingT",
"node_id": "MDQ6VXNlcjQ1MTYwNjQz",
"organizations_url": "https://api.github.com/users/CaoYiqingT/orgs",
"received_events_url": "https://api.github.com/users/CaoYiqingT/received_events",
"repos_url": "https://api.github.com/users/CaoYiqingT/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/CaoYiqingT/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CaoYiqingT/subscriptions",
"type": "User",
"url": "https://api.github.com/users/CaoYiqingT"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"The `Trainer` support `IterableDataset`, not just datasets."
] | 2022-04-27T07:50:13Z | 2023-07-25T15:07:57Z | 2023-07-25T15:07:57Z | NONE | null | null | null | ### System Info
```shell
I am using transformer trainer while meeting the issue.
The trainer requests torch.utils.data.Dataset as input, which loads the whole dataset into memory at once. Therefore, when the dataset is too large to load, there's nothing I can do except use an IterableDataset, which loads samples of data separately and results in low efficiency.
I wonder if there are any tricks like Sharding in huggingface trainer.
Looking forward to your reply.
```
### Who can help?
Trainer: @sgugger
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
None
### Expected behavior
```shell
I wonder if there are any tricks like fairseq Sharding very large datasets https://fairseq.readthedocs.io/en/latest/getting_started.html.
Thanks a lot!
```
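For illustration (not part of the original question): streaming is the usual way to keep memory bounded with `datasets`, and as noted in the reply above, the resulting `IterableDataset` can be passed to the `Trainer` (with `max_steps` set, since the total length is unknown). A minimal sketch, using `oscar` as a stand-in for a very large corpus:
```python
# Illustrative sketch only: stream samples instead of materializing the dataset.
from datasets import load_dataset

streamed = load_dataset(
    "oscar", "unshuffled_deduplicated_en", split="train", streaming=True
)
for example in streamed.take(2):  # only these samples are actually downloaded
    print(example["text"][:80])
```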
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4235/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4235/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4016 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4016/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4016/comments | https://api.github.com/repos/huggingface/datasets/issues/4016/events | https://github.com/huggingface/datasets/pull/4016 | 1,180,557,828 | PR_kwDODunzps41AWBk | 4,016 | Support streaming blimp dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-25T09:39:10Z | 2022-03-25T11:19:18Z | 2022-03-25T11:14:13Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4016.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4016",
"merged_at": "2022-03-25T11:14:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4016.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4016"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4016/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4016/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3090 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3090/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3090/comments | https://api.github.com/repos/huggingface/datasets/issues/3090/events | https://github.com/huggingface/datasets/pull/3090 | 1,027,100,371 | PR_kwDODunzps4tPEtH | 3,090 | Update BibTeX entry | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2021-10-15T05:39:27Z | 2021-10-15T07:35:57Z | 2021-10-15T07:35:57Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3090.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3090",
"merged_at": "2021-10-15T07:35:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3090.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3090"
} | Update BibTeX entry. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3090/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3090/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6218/comments | https://api.github.com/repos/huggingface/datasets/issues/6218/events | https://github.com/huggingface/datasets/pull/6218 | 1,883,734,000 | PR_kwDODunzps5Zqw3Y | 6,218 | Rename old push_to_hub configs to "default" in dataset_infos | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | 2023-09-06T10:40:05Z | 2023-09-07T08:31:29Z | 2023-09-06T11:23:56Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6218.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6218",
"merged_at": "2023-09-06T11:23:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6218.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6218"
} | Fix
```python
from datasets import load_dataset_builder
b = load_dataset_builder("lambdalabs/pokemon-blip-captions", "default")
print(b.info)
```
which should return
```
DatasetInfo(
features={'image': Image(decode=True, id=None), 'text': Value(dtype='string', id=None)},
dataset_name='pokemon-blip-captions',
config_name='default',
version=0.0.0,
splits={'train': SplitInfo(name='train', num_bytes=119417410.0, num_examples=833, shard_lengths=None, dataset_name='pokemon-blip-captions')},
download_checksums=None,
download_size=99672355,
dataset_size=119417410.0,
size_in_bytes=219089765.0,
...
)
```
instead of an empty dataset info.
The dataset has a dataset_infos.json file with a deprecated config name "lambdalabs--pokemon-blip-captions". We switched those config names to "default" in 2.14, so the builder.info should take this into account. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6218/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6218/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1404/comments | https://api.github.com/repos/huggingface/datasets/issues/1404/events | https://github.com/huggingface/datasets/pull/1404 | 760,575,473 | MDExOlB1bGxSZXF1ZXN0NTM1Mzg0NzEz | 1,404 | Add Acronym Identification Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhishekkrthakur",
"id": 1183441,
"login": "abhishekkrthakur",
"node_id": "MDQ6VXNlcjExODM0NDE=",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhishekkrthakur"
} | [] | closed | false | null | [] | null | [
"fixed @lhoestq "
] | 2020-12-09T18:38:54Z | 2020-12-14T13:12:01Z | 2020-12-14T13:12:00Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1404.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1404",
"merged_at": "2020-12-14T13:12:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1404.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1404"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1404/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1404/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/5900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5900/comments | https://api.github.com/repos/huggingface/datasets/issues/5900/events | https://github.com/huggingface/datasets/pull/5900 | 1,727,129,617 | PR_kwDODunzps5RahTR | 5,900 | Fix minor typo in docs loading.mdx | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-05-26T08:10:54Z | 2023-05-26T09:34:15Z | 2023-05-26T09:25:12Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5900.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5900",
"merged_at": "2023-05-26T09:25:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5900.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5900"
} | Minor fix. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5900/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5900/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3139/comments | https://api.github.com/repos/huggingface/datasets/issues/3139/events | https://github.com/huggingface/datasets/issues/3139 | 1,033,524,079 | I_kwDODunzps49mlNv | 3,139 | Fix file/directory deletion on Windows | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
... | null | [] | 2021-10-22T12:22:08Z | 2021-10-22T12:22:08Z | null | CONTRIBUTOR | null | null | null | Currently, on Windows, some attempts to delete a dataset file/directory will fail with a `PermissionError`.
Examples:
- download a dataset, then force redownload it in the same session while keeping a reference to the downloaded dataset
```python
from datasets import load_dataset
dset = load_dataset("sst", split="train")
dset = load_dataset("sst", split="train", download_mode="force_redownload")
```
- try to clean up the cache files while keeping a reference to those files (via the mapped dataset):
```python
from datasets import load_dataset
dset = load_dataset("sst", split="train")
dset_mapped = dset.map(lambda _: {"dummy_col": 1})
dset.cleanup_cache_files()
```
We should fix those.
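For illustration only (a generic workaround pattern, not how `datasets` ultimately addressed it): on Windows the failure typically comes from memory-mapped Arrow files keeping an open file handle, so a retrying remover that first drops lingering references is one common mitigation:
```python
# Hypothetical helper, shown only to illustrate the failure mode and a mitigation.
import gc
import os
import time


def remove_with_retries(path: str, retries: int = 3, delay: float = 0.5) -> bool:
    for _ in range(retries):
        try:
            os.remove(path)
            return True
        except PermissionError:
            gc.collect()        # release memory-mapped buffers still referenced
            time.sleep(delay)   # give the OS time to close the handle
    return False
```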
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3139/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3139/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2710/comments | https://api.github.com/repos/huggingface/datasets/issues/2710/events | https://github.com/huggingface/datasets/pull/2710 | 951,723,326 | MDExOlB1bGxSZXF1ZXN0Njk2MDYyNjAy | 2,710 | Update WikiANN data URL | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"We have to update the URL in the XTREME benchmark as well:\r\n\r\nhttps://github.com/huggingface/datasets/blob/0dfc639cec450ed8762a997789a2ed63e63cdcf2/datasets/xtreme/xtreme.py#L411-L411\r\n\r\n"
] | 2021-07-23T16:29:21Z | 2021-07-26T09:34:23Z | 2021-07-26T09:34:23Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2710.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2710",
"merged_at": "2021-07-26T09:34:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2710.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2710"
} | WikiANN data source URL is no longer accessible: 404 error from Dropbox.
We have decided to host it at Hugging Face. This PR updates the data source URL, the metadata JSON file and the dataset card.
Close #2691. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2710/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2710/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3655 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3655/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3655/comments | https://api.github.com/repos/huggingface/datasets/issues/3655/events | https://github.com/huggingface/datasets/issues/3655 | 1,119,801,077 | I_kwDODunzps5Cvs71 | 3,655 | Pubmed dataset not reachable | {
"avatar_url": "https://avatars.githubusercontent.com/u/77638579?v=4",
"events_url": "https://api.github.com/users/abhi-mosaic/events{/privacy}",
"followers_url": "https://api.github.com/users/abhi-mosaic/followers",
"following_url": "https://api.github.com/users/abhi-mosaic/following{/other_user}",
"gists_url": "https://api.github.com/users/abhi-mosaic/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhi-mosaic",
"id": 77638579,
"login": "abhi-mosaic",
"node_id": "MDQ6VXNlcjc3NjM4NTc5",
"organizations_url": "https://api.github.com/users/abhi-mosaic/orgs",
"received_events_url": "https://api.github.com/users/abhi-mosaic/received_events",
"repos_url": "https://api.github.com/users/abhi-mosaic/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhi-mosaic/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhi-mosaic/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhi-mosaic"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Hi @abhi-mosaic, thanks for reporting.\r\n\r\nI'm looking at it... ",
"also hitting this issue",
"Hey @albertvillanova, sorry to reopen this... I can confirm that on `master` branch the dataset is downloadable now but it is still broken in streaming mode:\r\n\r\n```python\r\n >>> import datasets\r\n >>> pubmed... | 2022-01-31T18:45:47Z | 2022-12-19T19:18:10Z | 2022-02-14T14:15:41Z | CONTRIBUTOR | null | null | null | ## Describe the bug
Trying to use the `pubmed` dataset fails because the source files cannot be reached / downloaded.
## Steps to reproduce the bug
```python
pubmed_train = datasets.load_dataset('pubmed', split='train')
```
## Expected results
Should begin downloading the pubmed dataset.
## Actual results
```
ConnectionError: Couldn't reach ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz (InvalidSchema("No connection adapters were found for 'ftp://ftp.ncbi.nlm.nih.gov/pubmed/baseline/pubmed21n0865.xml.gz'"))
```
## Environment info
- `datasets` version: 1.18.2
- Platform: macOS-11.4-x86_64-i386-64bit
- Python version: 3.8.2
- PyArrow version: 6.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3655/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3655/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2237/comments | https://api.github.com/repos/huggingface/datasets/issues/2237/events | https://github.com/huggingface/datasets/issues/2237 | 861,427,439 | MDU6SXNzdWU4NjE0Mjc0Mzk= | 2,237 | Update Dataset.dataset_size after transformed with map | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"@albertvillanova I would like to take this up. It would be great if you could point me as to how the dataset size is calculated in HF. Thanks!"
] | 2021-04-19T15:19:38Z | 2021-04-20T14:22:05Z | null | MEMBER | null | null | null | After loading a dataset, if we transform it by using `.map`, its `dataset_size` attribute is not updated. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2237/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2237/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2846/comments | https://api.github.com/repos/huggingface/datasets/issues/2846/events | https://github.com/huggingface/datasets/issues/2846 | 981,587,590 | MDU6SXNzdWU5ODE1ODc1OTA= | 2,846 | Negative timezone | {
"avatar_url": "https://avatars.githubusercontent.com/u/7156771?v=4",
"events_url": "https://api.github.com/users/jadermcs/events{/privacy}",
"followers_url": "https://api.github.com/users/jadermcs/followers",
"following_url": "https://api.github.com/users/jadermcs/following{/other_user}",
"gists_url": "https://api.github.com/users/jadermcs/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jadermcs",
"id": 7156771,
"login": "jadermcs",
"node_id": "MDQ6VXNlcjcxNTY3NzE=",
"organizations_url": "https://api.github.com/users/jadermcs/orgs",
"received_events_url": "https://api.github.com/users/jadermcs/received_events",
"repos_url": "https://api.github.com/users/jadermcs/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jadermcs/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jadermcs/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jadermcs"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Fixed by #2847."
] | 2021-08-27T20:50:33Z | 2021-09-10T11:51:07Z | 2021-09-10T11:51:07Z | CONTRIBUTOR | null | null | null | ## Describe the bug
The load_dataset method do not accept a parquet file with a negative timezone, as it has the following regex:
```
"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$"
```
So a valid timestamp ```timestamp[us, tz=-03:00]``` returns an error when loading parquet files.
## Steps to reproduce the bug
```python
# Where the timestamp column has a tz of -03:00
datasets = load_dataset('parquet', data_files={'train': train_files, 'validation': validation_files,
'test': test_files}, cache_dir="./cache_teste/")
```
## Expected results
The -03:00 is a valid tz so the regex should accept this without raising an error.
## Actual results
As this regex rejects a valid tz, it raises the following error:
```python
raise ValueError(
f"{datasets_dtype} is not a validly formatted string representation of a pyarrow timestamp."
f"Examples include timestamp[us] or timestamp[us, tz=America/New_York]"
f"See: https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp"
)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.11.0
- Platform: Ubuntu 20.04
- Python version: 3.8
- PyArrow version: 5.0.0
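For illustration (a plausible fix, not necessarily the exact patch merged in #2847): widening the character class so negative UTC offsets are accepted is enough to make the example above pass:
```python
# Illustrative only: allow "-" inside the tz part of the timestamp pattern.
import re

TIMESTAMP_PATTERN = re.compile(r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+\-:]*)$")

assert TIMESTAMP_PATTERN.match("us, tz=America/New_York")
assert TIMESTAMP_PATTERN.match("us, tz=-03:00")  # previously rejected
```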
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2846/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2846/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/4152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4152/comments | https://api.github.com/repos/huggingface/datasets/issues/4152/events | https://github.com/huggingface/datasets/issues/4152 | 1,202,034,115 | I_kwDODunzps5HpZXD | 4,152 | ArrayND error in pyarrow 5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Where do we bump the required pyarrow version? Any inputs on how I fix this issue? ",
"We need to bump it in `setup.py` as well as update some CI job to use pyarrow 6 instead of 5 in `.circleci/config.yaml` and `.github/workflows/benchmarks.yaml`"
] | 2022-04-12T15:41:40Z | 2022-05-04T09:29:46Z | 2022-05-04T09:29:46Z | MEMBER | null | null | null | As found in https://github.com/huggingface/datasets/pull/3903, The ArrayND features fail on pyarrow 5:
```python
import pyarrow as pa
from datasets import Array2D
from datasets.table import cast_array_to_feature
arr = pa.array([[[0]]])
feature_type = Array2D(shape=(1, 1), dtype="int64")
cast_array_to_feature(arr, feature_type)
```
raises
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-8-04610f9fa78c> in <module>
----> 1 cast_array_to_feature(pa.array([[[0]]]), Array2D(shape=(1, 1), dtype="int32"))
~/Desktop/hf/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs)
1672 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1673 else:
-> 1674 return func(array, *args, **kwargs)
1675
1676 return wrapper
~/Desktop/hf/datasets/src/datasets/table.py in cast_array_to_feature(array, feature, allow_number_to_str)
1806 return array_cast(array, get_nested_type(feature), allow_number_to_str=allow_number_to_str)
1807 elif not isinstance(feature, (Sequence, dict, list, tuple)):
-> 1808 return array_cast(array, feature(), allow_number_to_str=allow_number_to_str)
1809 raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
1810
~/Desktop/hf/datasets/src/datasets/table.py in wrapper(array, *args, **kwargs)
1672 return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
1673 else:
-> 1674 return func(array, *args, **kwargs)
1675
1676 return wrapper
~/Desktop/hf/datasets/src/datasets/table.py in array_cast(array, pa_type, allow_number_to_str)
1705 array = array.storage
1706 if isinstance(pa_type, pa.ExtensionType):
-> 1707 return pa_type.wrap_array(array)
1708 elif pa.types.is_struct(array.type):
1709 if pa.types.is_struct(pa_type) and (
AttributeError: 'Array2DExtensionType' object has no attribute 'wrap_array'
```
The thing is that `cast_array_to_feature` is called when writing an Arrow file, so creating an Arrow dataset using any ArrayND type currently fails.
`wrap_array` has been added in pyarrow 6, so we can either bump the required pyarrow version or fix this for pyarrow 5 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4152/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4152/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3849/comments | https://api.github.com/repos/huggingface/datasets/issues/3849/events | https://github.com/huggingface/datasets/pull/3849 | 1,162,091,075 | PR_kwDODunzps40E6sW | 3,849 | Add "Adversarial GLUE" dataset to datasets library | {
"avatar_url": "https://avatars.githubusercontent.com/u/13238952?v=4",
"events_url": "https://api.github.com/users/jxmorris12/events{/privacy}",
"followers_url": "https://api.github.com/users/jxmorris12/followers",
"following_url": "https://api.github.com/users/jxmorris12/following{/other_user}",
"gists_url": "https://api.github.com/users/jxmorris12/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jxmorris12",
"id": 13238952,
"login": "jxmorris12",
"node_id": "MDQ6VXNlcjEzMjM4OTUy",
"organizations_url": "https://api.github.com/users/jxmorris12/orgs",
"received_events_url": "https://api.github.com/users/jxmorris12/received_events",
"repos_url": "https://api.github.com/users/jxmorris12/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jxmorris12/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jxmorris12/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jxmorris12"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq can you review when you have some time?",
"Hi @lhoestq -- thanks so much for your review! I just added the stuff you requested to the README.md, including an example from the dataset, the table of contents, and lots of sec... | 2022-03-08T00:47:11Z | 2022-03-28T11:17:14Z | 2022-03-28T11:12:04Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3849.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3849",
"merged_at": "2022-03-28T11:12:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3849.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3849"
} | Adds the Adversarial GLUE dataset: https://adversarialglue.github.io/
```python
>>> import datasets
>>> datasets.load_dataset('adv_glue')
Using the latest cached version of the module from /home/jxm3/.cache/huggingface/modules/datasets_modules/datasets/adv_glue/26709a83facad2830d72d4419dd179c0be092f4ad3303ad0ebe815d0cdba5cb4 (last modified on Mon Mar 7 19:19:48 2022) since it couldn't be found locally at adv_glue., or remotely on the Hugging Face Hub.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jxm3/random/datasets/src/datasets/load.py", line 1657, in load_dataset
builder_instance = load_dataset_builder(
File "/home/jxm3/random/datasets/src/datasets/load.py", line 1510, in load_dataset_builder
builder_instance: DatasetBuilder = builder_cls(
File "/home/jxm3/random/datasets/src/datasets/builder.py", line 1021, in __init__
super().__init__(*args, **kwargs)
File "/home/jxm3/random/datasets/src/datasets/builder.py", line 258, in __init__
self.config, self.config_id = self._create_builder_config(
File "/home/jxm3/random/datasets/src/datasets/builder.py", line 337, in _create_builder_config
raise ValueError(
ValueError: Config name is missing.
Please pick one among the available configs: ['adv_sst2', 'adv_qqp', 'adv_mnli', 'adv_mnli_mismatched', 'adv_qnli', 'adv_rte']
Example of usage:
`load_dataset('adv_glue', 'adv_sst2')`
>>> datasets.load_dataset('adv_glue', 'adv_sst2')['validation'][0]
Reusing dataset adv_glue (/home/jxm3/.cache/huggingface/datasets/adv_glue/adv_sst2/1.0.0/3719a903f606f2c96654d87b421bc01114c37084057cdccae65cd7bc24b10933)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 604.11it/s]
{'sentence': "it 's an uneven treat that bores fun at the democratic exercise while also examining its significance for those who take part .", 'label': 1, 'idx': 0}
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3849/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3849/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2830/comments | https://api.github.com/repos/huggingface/datasets/issues/2830/events | https://github.com/huggingface/datasets/pull/2830 | 977,563,947 | MDExOlB1bGxSZXF1ZXN0NzE4MjkyMTM2 | 2,830 | Add imagefolder dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/32437151?v=4",
"events_url": "https://api.github.com/users/nateraw/events{/privacy}",
"followers_url": "https://api.github.com/users/nateraw/followers",
"following_url": "https://api.github.com/users/nateraw/following{/other_user}",
"gists_url": "https://api.github.com/users/nateraw/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nateraw",
"id": 32437151,
"login": "nateraw",
"node_id": "MDQ6VXNlcjMyNDM3MTUx",
"organizations_url": "https://api.github.com/users/nateraw/orgs",
"received_events_url": "https://api.github.com/users/nateraw/received_events",
"repos_url": "https://api.github.com/users/nateraw/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nateraw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nateraw/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nateraw"
} | [] | closed | false | null | [] | null | [
"@lhoestq @albertvillanova it would be super cool if we could get the Image Classification task to work with this. I'm not sure how to have the dataset find the unique label names _after_ the dataset has been loaded. Is that even possible? \r\n\r\nMy hacky community version [here](https://huggingface.co/datasets/na... | 2021-08-23T23:34:06Z | 2022-03-01T16:29:44Z | 2022-03-01T16:29:44Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2830.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2830",
"merged_at": "2022-03-01T16:29:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2830.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2830"
} | A generic imagefolder dataset inspired by `torchvision.datasets.ImageFolder`.
Resolves #2508
---
Example Usage:
[](https://colab.research.google.com/gist/nateraw/954fa8cba4ff806f6147a782fa9efd1a/imagefolder-official-example.ipynb) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2830/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2830/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4026/comments | https://api.github.com/repos/huggingface/datasets/issues/4026/events | https://github.com/huggingface/datasets/pull/4026 | 1,180,968,774 | PR_kwDODunzps41Btcm | 4,026 | Support streaming xtreme dataset for bucc18 config | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-03-25T16:00:40Z | 2022-03-25T16:26:50Z | 2022-03-25T16:21:52Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4026.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4026",
"merged_at": "2022-03-25T16:21:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4026.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4026"
} | Support streaming xtreme dataset for bucc18 config. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4026/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4026/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4250 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4250/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4250/comments | https://api.github.com/repos/huggingface/datasets/issues/4250/events | https://github.com/huggingface/datasets/pull/4250 | 1,219,093,830 | PR_kwDODunzps429yjN | 4,250 | Bump PyArrow Version to 6 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4",
"events_url": "https://api.github.com/users/dnaveenr/events{/privacy}",
"followers_url": "https://api.github.com/users/dnaveenr/followers",
"following_url": "https://api.github.com/users/dnaveenr/following{/other_user}",
"gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dnaveenr",
"id": 17746528,
"login": "dnaveenr",
"node_id": "MDQ6VXNlcjE3NzQ2NTI4",
"organizations_url": "https://api.github.com/users/dnaveenr/orgs",
"received_events_url": "https://api.github.com/users/dnaveenr/received_events",
"repos_url": "https://api.github.com/users/dnaveenr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dnaveenr"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Updated meta.yaml as well. Thanks.",
"I'm OK with bumping PyArrow to version 6 to match the version in Colab, but maybe a better solution would be to stop using extension types in our codebase to avoid similar issues.",
"> but ma... | 2022-04-28T18:10:50Z | 2022-05-04T09:36:52Z | 2022-05-04T09:29:46Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4250.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4250",
"merged_at": "2022-05-04T09:29:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4250.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4250"
} | Fixes #4152
This PR updates the PyArrow version to 6 in setup.py, CI job files .circleci/config.yaml and .github/workflows/benchmarks.yaml files.
This will fix ArrayND error which exists in pyarrow 5. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4250/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4250/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4682/comments | https://api.github.com/repos/huggingface/datasets/issues/4682/events | https://github.com/huggingface/datasets/issues/4682 | 1,304,788,215 | I_kwDODunzps5NxXz3 | 4,682 | weird issue/bug with columns (dataset iterable/stream mode) | {
"avatar_url": "https://avatars.githubusercontent.com/u/12104720?v=4",
"events_url": "https://api.github.com/users/eunseojo/events{/privacy}",
"followers_url": "https://api.github.com/users/eunseojo/followers",
"following_url": "https://api.github.com/users/eunseojo/following{/other_user}",
"gists_url": "https://api.github.com/users/eunseojo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/eunseojo",
"id": 12104720,
"login": "eunseojo",
"node_id": "MDQ6VXNlcjEyMTA0NzIw",
"organizations_url": "https://api.github.com/users/eunseojo/orgs",
"received_events_url": "https://api.github.com/users/eunseojo/received_events",
"repos_url": "https://api.github.com/users/eunseojo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/eunseojo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eunseojo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/eunseojo"
} | [] | open | false | null | [] | null | [] | 2022-07-14T13:26:47Z | 2022-07-14T13:26:47Z | null | CONTRIBUTOR | null | null | null | I have a dataset online (CloverSearch/cc-news-mutlilingual) that has a bunch of columns, two of which are "score_title_maintext" and "score_title_description". the original files are jsonl formatted. I was trying to iterate through via streaming mode and grab all "score_title_description" values, but I kept getting key not found after a certain point of iteration. I found that some json objects in the file don't have "score_title_description". And in SOME cases, this returns a NONE and in others it just gets a key error. Why is there an inconsistency here and how can I fix it? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4682/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4682/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5720/comments | https://api.github.com/repos/huggingface/datasets/issues/5720/events | https://github.com/huggingface/datasets/issues/5720 | 1,659,610,705 | I_kwDODunzps5i66ZR | 5,720 | Streaming IterableDatasets do not work with torch DataLoaders | {
"avatar_url": "https://avatars.githubusercontent.com/u/29244648?v=4",
"events_url": "https://api.github.com/users/jlehrer1/events{/privacy}",
"followers_url": "https://api.github.com/users/jlehrer1/followers",
"following_url": "https://api.github.com/users/jlehrer1/following{/other_user}",
"gists_url": "https://api.github.com/users/jlehrer1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jlehrer1",
"id": 29244648,
"login": "jlehrer1",
"node_id": "MDQ6VXNlcjI5MjQ0NjQ4",
"organizations_url": "https://api.github.com/users/jlehrer1/orgs",
"received_events_url": "https://api.github.com/users/jlehrer1/received_events",
"repos_url": "https://api.github.com/users/jlehrer1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jlehrer1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jlehrer1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jlehrer1"
} | [] | open | false | null | [] | null | [
"Edit: This behavior is true even without `.take/.set`",
"I'm experiencing the same problem that @jlehrer1. I was able to reproduce it with a very small example:\r\n\r\n```py\r\nfrom datasets import Dataset, load_dataset, load_dataset_builder\r\nfrom torch.utils.data import DataLoader\r\n\r\n\r\ndef my_gen():\r\n... | 2023-04-08T18:45:48Z | 2023-05-27T12:57:08Z | null | NONE | null | null | null | ### Describe the bug
When using streaming datasets set up with train/val split using `.skip()` and `.take()`, the following error occurs when iterating over a torch dataloader:
```
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 363, in __iter__
self._iterator = self._get_iterator()
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 314, in _get_iterator
return _MultiProcessingDataLoaderIter(self)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 927, in __init__
w.start()
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/context.py", line 284, in _Popen
return Popen(process_obj)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Users/julian/miniconda3/envs/sims/lib/python3.9/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object '_generate_examples_from_tables_wrapper.<locals>.wrapper'
```
To reproduce, run the code
```
from datasets import load_dataset
data = load_dataset(args.dataset_name, split="train", streaming=True)
train_len = 5000
val_len = 100
train, val = data.take(train_len), data.skip(train_len).take(val_len)
traindata = IterableClipDataset(data, context_length=args.max_len, tokenizer=tokenizer, image_key="url", text_key="text")
traindata = DataLoader(traindata, batch_size=args.batch_size, num_workers=args.num_workers, persistent_workers=True)
```
Where the class IterableClipDataset is a simple wrapper to cast the dataset to a torch iterabledataset, defined via
```
from torch.utils.data import Dataset, IterableDataset
from torchvision.transforms import Compose, Resize, ToTensor
from transformers import AutoTokenizer
import requests
from PIL import Image
class IterableClipDataset(IterableDataset):
def __init__(self, dataset, context_length: int, image_transform=None, tokenizer=None, image_key="image", text_key="text"):
self.dataset = dataset
self.context_length = context_length
self.image_transform = Compose([Resize((224, 224)), ToTensor()]) if image_transform is None else image_transform
self.tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") if tokenizer is None else tokenizer
self.image_key = image_key
self.text_key = text_key
def read_image(self, url: str):
try: # Try to read the image
image = Image.open(requests.get(url, stream=True).raw)
except:
image = Image.new("RGB", (224, 224), (0, 0, 0))
return image
def process_sample(self, image, text):
if isinstance(image, str):
image = self.read_image(image)
if self.image_transform is not None:
image = self.image_transform(image)
text = self.tokenizer.encode(
text, add_special_tokens=True, max_length=self.context_length, truncation=True, padding="max_length"
)
text = torch.tensor(text, dtype=torch.long)
return image, text
def __iter__(self):
for sample in self.dataset:
image, text = sample[self.image_key], sample[self.text_key]
yield self.process_sample(image, text)
```
### Steps to reproduce the bug
Steps to reproduce
1. Install `datasets`, `torch`, and `PIL` (if you want to reproduce exactly)
2. Run the code above
### Expected behavior
Batched data is produced from the dataloader
### Environment info
```
datasets == 2.9.0
python == 3.9.12
torch == 1.11.0
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5720/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5720/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/3392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3392/comments | https://api.github.com/repos/huggingface/datasets/issues/3392/events | https://github.com/huggingface/datasets/issues/3392 | 1,073,073,408 | I_kwDODunzps4_9c0A | 3,392 | Dataset viewer issue for `dansbecker/hackernews_hiring_posts` | {
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/severo",
"id": 1676121,
"login": "severo",
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"organizations_url": "https://api.github.com/users/severo/orgs",
"received_events_url": "https://api.github.com/users/severo/received_events",
"repos_url": "https://api.github.com/users/severo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/severo"
} | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | [] | null | [
"This issue was fixed by me calling `all_datasets.push_to_hub(\"hackernews_hiring_posts\")`.\r\n\r\nThe previous problems were from calling `all_datasets.save_to_disk` and then pushing with `my_repo.git_add` and `my_repo.push_to_hub`.\r\n"
] | 2021-12-07T08:41:01Z | 2021-12-07T14:04:28Z | 2021-12-07T14:04:28Z | CONTRIBUTOR | null | null | null | ## Dataset viewer issue for `dansbecker/hackernews_hiring_posts`
**Link:** https://huggingface.co/datasets/dansbecker/hackernews_hiring_posts
*short description of the issue*
Dataset preview not showing for uploaded DatasetDict. See https://discuss.huggingface.co/t/dataset-preview-not-showing-for-uploaded-datasetdict/12603
Am I the one who added this dataset ?
No -> @dansbecker | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3392/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3392/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3082 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3082/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3082/comments | https://api.github.com/repos/huggingface/datasets/issues/3082/events | https://github.com/huggingface/datasets/pull/3082 | 1,026,388,994 | PR_kwDODunzps4tM2BV | 3,082 | Fix error related to huggingface_hub timeout parameter | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2021-10-14T13:17:47Z | 2021-10-14T14:39:52Z | 2021-10-14T14:39:51Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3082.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3082",
"merged_at": "2021-10-14T14:39:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3082.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3082"
} | The `huggingface_hub` package added the parameter `timeout` from version 0.0.19.
This PR bumps this minimal version.
Fix #3080. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3082/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3082/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4715/comments | https://api.github.com/repos/huggingface/datasets/issues/4715/events | https://github.com/huggingface/datasets/pull/4715 | 1,309,405,980 | PR_kwDODunzps47pSui | 4,715 | Fix POS tags | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"CI failures are about missing content in the dataset cards or bad tags, and this is unrelated to this PR. Merging :)"
] | 2022-07-19T11:52:54Z | 2022-07-19T12:54:34Z | 2022-07-19T12:41:16Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4715.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4715",
"merged_at": "2022-07-19T12:41:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4715.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4715"
} | We're now using `part-of-speech` and not `part-of-speech-tagging`, see discussion here: https://github.com/huggingface/datasets/commit/114c09aff2fa1519597b46fbcd5a8e0c0d3ae020#r78794777 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4715/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4715/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5377/comments | https://api.github.com/repos/huggingface/datasets/issues/5377/events | https://github.com/huggingface/datasets/pull/5377 | 1,503,477,833 | PR_kwDODunzps5Fz5lw | 5,377 | Add a parallel implementation of to_tf_dataset() | {
"avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4",
"events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}",
"followers_url": "https://api.github.com/users/Rocketknight1/followers",
"following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}",
"gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Rocketknight1",
"id": 12866554,
"login": "Rocketknight1",
"node_id": "MDQ6VXNlcjEyODY2NTU0",
"organizations_url": "https://api.github.com/users/Rocketknight1/orgs",
"received_events_url": "https://api.github.com/users/Rocketknight1/received_events",
"repos_url": "https://api.github.com/users/Rocketknight1/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Rocketknight1"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Failing because the test server uses Py3.7 but the `SharedMemory` features require Py3.8! I forgot we still support 3.7 for another couple of months. I'm not sure exactly how to proceed, whether I should leave this PR until then, or ... | 2022-12-19T19:40:27Z | 2023-01-25T16:28:44Z | 2023-01-25T16:21:40Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5377.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5377",
"merged_at": "2023-01-25T16:21:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5377.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5377"
} | Hey all! Here's a first draft of the PR to add a multiprocessing implementation for `to_tf_dataset()`. It worked in some quick testing for me, but obviously I need to do some much more rigorous testing/benchmarking, and add some proper library tests.
The core idea is that we do everything using `multiprocessing` and `numpy`, and just wrap a `tf.data.Dataset` around the output. We could also rewrite the existing single-threaded implementation based on this code, which might simplify it a bit.
Checklist:
- [X] Add initial draft
- [x] Check that it works regardless of whether the `collate_fn` or dataset returns `tf` or `np` arrays
- [x] Check that it works with `tf.string` return data
- [x] Check indices are correctly reshuffled each epoch
- [x] Make sure workers don't try to initialize a GPU device!!
- [x] Check `fit()` with multiple epochs works fine and that the progress bar is correct
- [x] Check there are no memory leaks or zombie processes
- [x] Benchmark performance
- [x] Tweak params for dataset inference - can we speed things up there a bit?
- [x] Add tests to the library
- [x] Add a PR to `transformers` to expose the `num_workers` argument via `prepare_tf_dataset` (will merge after this one is released)
- [x] Stop TF console spam!! (almost)
- [x] Add a method for creating SHM that doesn't crash if it was left and still linked
- [x] Add a barrier for Py <= 3.7 because it doesn't support SharedMemory
- [x] Support string dtypes by converting them into fixed-width character arrays | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5377/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5377/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4567/comments | https://api.github.com/repos/huggingface/datasets/issues/4567/events | https://github.com/huggingface/datasets/pull/4567 | 1,284,528,474 | PR_kwDODunzps46Wh0- | 4,567 | Add evaluation data for amazon_reviews_multi | {
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lewtun",
"id": 26859204,
"login": "lewtun",
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"repos_url": "https://api.github.com/users/lewtun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lewtun"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets."
] | 2022-06-25T09:40:52Z | 2023-09-24T09:35:22Z | 2022-09-23T09:37:23Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4567.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4567",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4567.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4567"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4567/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4567/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3507/comments | https://api.github.com/repos/huggingface/datasets/issues/3507/events | https://github.com/huggingface/datasets/issues/3507 | 1,091,214,808 | I_kwDODunzps5BCp3Y | 3,507 | Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "c5def5",
"default": fals... | closed | false | null | [] | null | [
"IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work), so that for datasets that have a working dataset preview, we can remove the dummy data IMO. On the other hand, it see... | 2021-12-30T17:04:25Z | 2022-11-04T15:31:38Z | 2022-11-04T15:31:37Z | MEMBER | null | null | null | I open this PR to have a public discussion about this topic and make a decision.
As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of also having the JSON metadata (dataset_infos.json file)?
On the other hand, the dummy data is necessary for testing (in our CI suite) that the canonical dataset loads correctly. However:
- the dataset preview feature is already an indirect test that the dataset loads correctly (it also tests it is streamable though)
- we are migrating canonical datasets to the Hub
Do we really need to continue testing them in our CI?
Also note that for generating both (the dataset_infos.json file and the dummy data), the entire dataset needs to be downloaded. This can be an issue for huge datasets (like WIT, with 400 GB of data).
Feel free to ping other people for the discussion.
CC: @lhoestq @mariosasko @thomwolf @julien-c @patrickvonplaten @anton-l @LysandreJik @yjernite @nateraw | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3507/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3507/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6263 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6263/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6263/comments | https://api.github.com/repos/huggingface/datasets/issues/6263/events | https://github.com/huggingface/datasets/issues/6263 | 1,914,951,043 | I_kwDODunzps5yI9WD | 6,263 | CI is broken: ImportError: cannot import name 'context' from 'tensorflow.python' | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2023-09-27T08:12:05Z | 2023-09-27T08:36:40Z | 2023-09-27T08:36:40Z | MEMBER | null | null | null | Python 3.10 CI is broken for `test_py310`.
See: https://github.com/huggingface/datasets/actions/runs/6322990957/job/17169678812?pr=6262
```
FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - ImportError: cannot import name 'context' from 'tensorflow.python' (/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/tensorflow/python/__init__.py)
```
```
_________________________ TempSeedTest.test_tensorflow _________________________
[gw1] linux -- Python 3.10.13 /opt/hostedtoolcache/Python/3.10.13/x64/bin/python
self = <tests.test_py_utils.TempSeedTest testMethod=test_tensorflow>
@require_tf
def test_tensorflow(self):
import tensorflow as tf
from tensorflow.keras import layers
model = layers.Dense(2)
def gen_random_output():
x = tf.random.uniform((1, 3))
return model(x).numpy()
> with temp_seed(42, set_tensorflow=True):
tests/test_py_utils.py:155:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/contextlib.py:135: in __enter__
return next(self.gen)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
seed = 42, set_pytorch = False, set_tensorflow = True
@contextmanager
def temp_seed(seed: int, set_pytorch=False, set_tensorflow=False):
"""Temporarily set the random seed. This works for python numpy, pytorch and tensorflow."""
np_state = np.random.get_state()
np.random.seed(seed)
if set_pytorch and config.TORCH_AVAILABLE:
import torch
torch_state = torch.random.get_rng_state()
torch.random.manual_seed(seed)
if torch.cuda.is_available():
torch_cuda_states = torch.cuda.get_rng_state_all()
torch.cuda.manual_seed_all(seed)
if set_tensorflow and config.TF_AVAILABLE:
import tensorflow as tf
> from tensorflow.python import context as tfpycontext
E ImportError: cannot import name 'context' from 'tensorflow.python' (/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/tensorflow/python/__init__.py)
/opt/hostedtoolcache/Python/3.10.13/x64/lib/python3.10/site-packages/datasets/utils/py_utils.py:257: ImportError
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6263/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6263/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2285 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2285/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2285/comments | https://api.github.com/repos/huggingface/datasets/issues/2285/events | https://github.com/huggingface/datasets/issues/2285 | 871,005,236 | MDU6SXNzdWU4NzEwMDUyMzY= | 2,285 | Help understanding how to build a dataset for language modeling as with the old TextDataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/46021411?v=4",
"events_url": "https://api.github.com/users/danieldiezmallo/events{/privacy}",
"followers_url": "https://api.github.com/users/danieldiezmallo/followers",
"following_url": "https://api.github.com/users/danieldiezmallo/following{/other_user}",
"gists_url": "https://api.github.com/users/danieldiezmallo/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/danieldiezmallo",
"id": 46021411,
"login": "danieldiezmallo",
"node_id": "MDQ6VXNlcjQ2MDIxNDEx",
"organizations_url": "https://api.github.com/users/danieldiezmallo/orgs",
"received_events_url": "https://api.github.com/users/danieldiezmallo/received_events",
"repos_url": "https://api.github.com/users/danieldiezmallo/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/danieldiezmallo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/danieldiezmallo/subscriptions",
"type": "User",
"url": "https://api.github.com/users/danieldiezmallo"
} | [] | closed | false | null | [] | null | [
"\r\nI received an answer for this question on the HuggingFace Datasets forum by @lhoestq\r\n\r\nHi !\r\n\r\nIf you want to tokenize line by line, you can use this:\r\n\r\n```\r\nmax_seq_length = 512\r\nnum_proc = 4\r\n\r\ndef tokenize_function(examples):\r\n# Remove empty lines\r\nexamples[\"text\"] = [line for li... | 2021-04-29T13:16:45Z | 2021-05-19T07:22:45Z | 2021-05-19T07:22:39Z | NONE | null | null | null | Hello,
I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file that has a whole document in each line, meaning that each line exceeds the normal 512-token limit of most tokenizers.
I would like to understand the process for building a text dataset that tokenizes each line, after first splitting the documents in the dataset into lines of a "tokenizable" size, as the old TextDataset class would do, where you only had to do the following, and a tokenized dataset without text loss would be available to pass to a DataCollator:
```
model_checkpoint = 'distilbert-base-uncased'
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
from transformers import TextDataset
dataset = TextDataset(
tokenizer=tokenizer,
file_path="path/to/text_file.txt",
block_size=512,
)
```
For now, what I have is the following, which, of course, throws an error because each line is longer than the maximum block size in the tokenizer:
```
import datasets
dataset = datasets.load_dataset('path/to/text_file.txt')
model_checkpoint = 'distilbert-base-uncased'
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
def tokenize_function(examples):
return tokenizer(examples["text"])
tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
tokenized_datasets
```
So what would be the "standard" way of creating a dataset in the way it was done before?
Thank you very much for the help :)) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2285/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2285/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2377/comments | https://api.github.com/repos/huggingface/datasets/issues/2377/events | https://github.com/huggingface/datasets/issues/2377 | 894,918,927 | MDU6SXNzdWU4OTQ5MTg5Mjc= | 2,377 | ArrowDataset.save_to_disk produces files that cannot be read using pyarrow.feather | {
"avatar_url": "https://avatars.githubusercontent.com/u/1829149?v=4",
"events_url": "https://api.github.com/users/Ark-kun/events{/privacy}",
"followers_url": "https://api.github.com/users/Ark-kun/followers",
"following_url": "https://api.github.com/users/Ark-kun/following{/other_user}",
"gists_url": "https://api.github.com/users/Ark-kun/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Ark-kun",
"id": 1829149,
"login": "Ark-kun",
"node_id": "MDQ6VXNlcjE4MjkxNDk=",
"organizations_url": "https://api.github.com/users/Ark-kun/orgs",
"received_events_url": "https://api.github.com/users/Ark-kun/received_events",
"repos_url": "https://api.github.com/users/Ark-kun/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Ark-kun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ark-kun/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Ark-kun"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Hi ! This is because we are actually using the arrow streaming format. We plan to switch to the arrow IPC format.\r\nMore info at #1933 ",
"Not sure if this was resolved, but I am getting a similar error when trying to load a dataset.arrow file directly: `ArrowInvalid: Not an Arrow file`",
"Since we're using t... | 2021-05-19T02:04:37Z | 2023-03-15T18:06:42Z | null | NONE | null | null | null | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
from datasets import load_dataset
from pyarrow import feather
dataset = load_dataset('imdb', split='train')
dataset.save_to_disk('dataset_dir')
table = feather.read_table('dataset_dir/dataset.arrow')
```
## Expected results
I expect that the saved dataset can be read by the official Apache Arrow methods.
## Actual results
```
File "/usr/local/lib/python3.7/site-packages/pyarrow/feather.py", line 236, in read_table
reader.open(source, use_memory_map=memory_map)
File "pyarrow/feather.pxi", line 67, in pyarrow.lib.FeatherReader.open
File "pyarrow/error.pxi", line 123, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Not a Feather V1 or Arrow IPC file
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: datasets-1.6.2
- Platform: Linux
- Python version: 3.7
- PyArrow version: 0.17.1, also 2.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2377/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2377/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1459/comments | https://api.github.com/repos/huggingface/datasets/issues/1459/events | https://github.com/huggingface/datasets/pull/1459 | 761,258,395 | MDExOlB1bGxSZXF1ZXN0NTM1OTUxMDY2 | 1,459 | Add Google Conceptual Captions Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhishekkrthakur",
"id": 1183441,
"login": "abhishekkrthakur",
"node_id": "MDQ6VXNlcjExODM0NDE=",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhishekkrthakur"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2020-12-10T13:50:33Z | 2022-04-14T13:14:19Z | 2022-04-14T13:07:49Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1459.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1459",
"merged_at": "2022-04-14T13:07:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1459.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1459"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1459/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1459/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/2401 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2401/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2401/comments | https://api.github.com/repos/huggingface/datasets/issues/2401/events | https://github.com/huggingface/datasets/issues/2401 | 899,910,521 | MDU6SXNzdWU4OTk5MTA1MjE= | 2,401 | load_dataset('natural_questions') fails with "ValueError: External features info don't match the dataset" | {
"avatar_url": "https://avatars.githubusercontent.com/u/15602718?v=4",
"events_url": "https://api.github.com/users/jonrbates/events{/privacy}",
"followers_url": "https://api.github.com/users/jonrbates/followers",
"following_url": "https://api.github.com/users/jonrbates/following{/other_user}",
"gists_url": "https://api.github.com/users/jonrbates/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jonrbates",
"id": 15602718,
"login": "jonrbates",
"node_id": "MDQ6VXNlcjE1NjAyNzE4",
"organizations_url": "https://api.github.com/users/jonrbates/orgs",
"received_events_url": "https://api.github.com/users/jonrbates/received_events",
"repos_url": "https://api.github.com/users/jonrbates/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jonrbates/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jonrbates/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jonrbates"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [
"I faced the similar problem. Downgrading datasets to 1.5.0 fixed it.",
"Thanks for reporting, I'm looking into it",
"I just opened #2438 to fix this :)",
"Hi ! This has been fixed in the 1.8.0 release of `datasets`"
] | 2021-05-24T18:38:53Z | 2021-06-09T09:07:25Z | 2021-06-09T09:07:25Z | NONE | null | null | null | ## Describe the bug
load_dataset('natural_questions') throws ValueError
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset('natural_questions', split='validation[:10]')
```
## Expected results
Call to load_dataset returns data.
## Actual results
```
Using custom data configuration default
Reusing dataset natural_questions (/mnt/d/huggingface/datasets/natural_questions/default/0.0.2/19bc04755018a3ad02ee74f7045cde4ba9b4162cb64450a87030ab786b123b76)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-d55ab8a8cc1c> in <module>
----> 1 datasets = load_dataset('natural_questions', split='validation[:10]', cache_dir='/mnt/d/huggingface/datasets')
~/miniconda3/lib/python3.8/site-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
756 keep_in_memory if keep_in_memory is not None else is_small_dataset(builder_instance.info.dataset_size)
757 )
--> 758 ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
759 if save_infos:
760 builder_instance._save_infos()
~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in as_dataset(self, split, run_post_process, ignore_verifications, in_memory)
735
736 # Create a dataset for each of the given splits
--> 737 datasets = utils.map_nested(
738 partial(
739 self._build_single_dataset,
~/miniconda3/lib/python3.8/site-packages/datasets/utils/py_utils.py in map_nested(function, data_struct, dict_only, map_list, map_tuple, map_numpy, num_proc, types)
193 # Singleton
194 if not isinstance(data_struct, dict) and not isinstance(data_struct, types):
--> 195 return function(data_struct)
196
197 disable_tqdm = bool(logger.getEffectiveLevel() > INFO)
~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in _build_single_dataset(self, split, run_post_process, ignore_verifications, in_memory)
762
763 # Build base dataset
--> 764 ds = self._as_dataset(
765 split=split,
766 in_memory=in_memory,
~/miniconda3/lib/python3.8/site-packages/datasets/builder.py in _as_dataset(self, split, in_memory)
838 in_memory=in_memory,
839 )
--> 840 return Dataset(**dataset_kwargs)
841
842 def _post_process(self, dataset: Dataset, resources_paths: Dict[str, str]) -> Optional[Dataset]:
~/miniconda3/lib/python3.8/site-packages/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)
271 assert self._fingerprint is not None, "Fingerprint can't be None in a Dataset object"
272 if self.info.features.type != inferred_features.type:
--> 273 raise ValueError(
274 "External features info don't match the dataset:\nGot\n{}\nwith type\n{}\n\nbut expected something like\n{}\nwith type\n{}".format(
275 self.info.features, self.info.features.type, inferred_features, inferred_features.type
ValueError: External features info don't match the dataset:
Got
{'id': Value(dtype='string', id=None), 'document': {'title': Value(dtype='string', id=None), 'url': Value(dtype='string', id=None), 'html': Value(dtype='string', id=None), 'tokens': Sequence(feature={'token': Value(dtype='string', id=None), 'is_html': Value(dtype='bool', id=None)}, length=-1, id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': Sequence(feature={'id': Value(dtype='string', id=None), 'long_answer': {'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None)}, 'short_answers': Sequence(feature={'start_token': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'end_byte': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}, length=-1, id=None), 'yes_no_answer': ClassLabel(num_classes=2, names=['NO', 'YES'], names_file=None, id=None)}, length=-1, id=None)}
with type
struct<annotations: struct<id: list<item: string>, long_answer: list<item: struct<start_token: int64, end_token: int64, start_byte: int64, end_byte: int64>>, short_answers: list<item: struct<end_byte: list<item: int64>, end_token: list<item: int64>, start_byte: list<item: int64>, start_token: list<item: int64>, text: list<item: string>>>, yes_no_answer: list<item: int64>>, document: struct<title: string, url: string, html: string, tokens: struct<is_html: list<item: bool>, token: list<item: string>>>, id: string, question: struct<text: string, tokens: list<item: string>>>
but expected something like
{'id': Value(dtype='string', id=None), 'document': {'html': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'tokens': {'is_html': Sequence(feature=Value(dtype='bool', id=None), length=-1, id=None), 'token': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'url': Value(dtype='string', id=None)}, 'question': {'text': Value(dtype='string', id=None), 'tokens': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'annotations': {'id': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'long_answer': [{'end_byte': Value(dtype='int64', id=None), 'end_token': Value(dtype='int64', id=None), 'start_byte': Value(dtype='int64', id=None), 'start_token': Value(dtype='int64', id=None)}], 'short_answers': [{'end_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'end_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_byte': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'start_token': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None), 'text': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}], 'yes_no_answer': Sequence(feature=Value(dtype='int64', id=None), length=-1, id=None)}}
with type
struct<annotations: struct<id: list<item: string>, long_answer: list<item: struct<end_byte: int64, end_token: int64, start_byte: int64, start_token: int64>>, short_answers: list<item: struct<end_byte: list<item: int64>, end_token: list<item: int64>, start_byte: list<item: int64>, start_token: list<item: int64>, text: list<item: string>>>, yes_no_answer: list<item: int64>>, document: struct<html: string, title: string, tokens: struct<is_html: list<item: bool>, token: list<item: string>>, url: string>, id: string, question: struct<text: string, tokens: list<item: string>>>
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.6.2
- Platform: Linux-5.4.72-microsoft-standard-WSL2-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2401/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2401/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3134/comments | https://api.github.com/repos/huggingface/datasets/issues/3134/events | https://github.com/huggingface/datasets/issues/3134 | 1,033,251,755 | I_kwDODunzps49liur | 3,134 | Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py | {
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"events_url": "https://api.github.com/users/yananchen1989/events{/privacy}",
"followers_url": "https://api.github.com/users/yananchen1989/followers",
"following_url": "https://api.github.com/users/yananchen1989/following{/other_user}",
"gists_url": "https://api.github.com/users/yananchen1989/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/yananchen1989",
"id": 26405281,
"login": "yananchen1989",
"node_id": "MDQ6VXNlcjI2NDA1Mjgx",
"organizations_url": "https://api.github.com/users/yananchen1989/orgs",
"received_events_url": "https://api.github.com/users/yananchen1989/received_events",
"repos_url": "https://api.github.com/users/yananchen1989/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/yananchen1989/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yananchen1989/subscriptions",
"type": "User",
"url": "https://api.github.com/users/yananchen1989"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | [] | null | [
"Hi,\r\n\r\nDid you try to run the code multiple times (GitHub URLs can be down sometimes for various reasons)? I can access `https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py`, so this code is working without an error on my side. \r\n\r\nAdditionally, can you please run the `data... | 2021-10-22T07:07:52Z | 2023-09-14T01:19:45Z | 2022-01-19T14:02:31Z | NONE | null | null | null | datasets version: 1.12.1
`metric = datasets.load_metric('rouge')`
The error:
> ConnectionError Traceback (most recent call last)
> <ipython-input-3-dd10a0c5212f> in <module>
> ----> 1 metric = datasets.load_metric('rouge')
>
> /usr/local/lib/python3.6/dist-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)
> 613 download_config=download_config,
> 614 download_mode=download_mode,
> --> 615 dataset=False,
> 616 )
> 617 metric_cls = import_main_class(module_path, dataset=False)
>
> /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, dynamic_modules_path, return_resolved_file_path, **download_kwargs)
> 328 file_path = hf_github_url(path=path, name=name, dataset=dataset, version=script_version)
> 329 try:
> --> 330 local_path = cached_path(file_path, download_config=download_config)
> 331 except FileNotFoundError:
> 332 if script_version is not None:
>
> /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs)
> 296 use_etag=download_config.use_etag,
> 297 max_retries=download_config.max_retries,
> --> 298 use_auth_token=download_config.use_auth_token,
> 299 )
> 300 elif os.path.exists(url_or_filename):
>
> /usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
> 603 raise FileNotFoundError("Couldn't find file at {}".format(url))
> 604 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
> --> 605 raise ConnectionError("Couldn't reach {}".format(url))
> 606
> 607 # Try a second time
>
> ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.11.0/metrics/rouge/rouge.py
Is there any remedy to solve the connection issue ? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3134/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3134/timeline | null | completed | false |
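The connection error above is usually transient (the raw.githubusercontent.com URL being briefly unreachable), so a small retry wrapper is often enough. This is a minimal sketch assuming the same `datasets` 1.12.x API; `load_metric_with_retries` is just an illustrative helper name, not part of the library.

```python
# Minimal retry sketch for transient GitHub connection errors
# (assumes datasets 1.12.x, where metrics are loaded via datasets.load_metric).
import time

import datasets

def load_metric_with_retries(name, retries=3, wait_seconds=5):
    for attempt in range(retries):
        try:
            return datasets.load_metric(name)
        except ConnectionError:
            if attempt == retries - 1:
                raise  # still unreachable after the last attempt
            time.sleep(wait_seconds)

metric = load_metric_with_retries("rouge")
```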
https://api.github.com/repos/huggingface/datasets/issues/5621 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5621/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5621/comments | https://api.github.com/repos/huggingface/datasets/issues/5621/events | https://github.com/huggingface/datasets/pull/5621 | 1,615,029,615 | PR_kwDODunzps5LjwD8 | 5,621 | Adding Oracle Cloud to docs | {
"avatar_url": "https://avatars.githubusercontent.com/u/29129502?v=4",
"events_url": "https://api.github.com/users/ahosler/events{/privacy}",
"followers_url": "https://api.github.com/users/ahosler/followers",
"following_url": "https://api.github.com/users/ahosler/following{/other_user}",
"gists_url": "https://api.github.com/users/ahosler/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/ahosler",
"id": 29129502,
"login": "ahosler",
"node_id": "MDQ6VXNlcjI5MTI5NTAy",
"organizations_url": "https://api.github.com/users/ahosler/orgs",
"received_events_url": "https://api.github.com/users/ahosler/received_events",
"repos_url": "https://api.github.com/users/ahosler/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/ahosler/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ahosler/subscriptions",
"type": "User",
"url": "https://api.github.com/users/ahosler"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-08T10:22:50Z | 2023-03-11T00:57:18Z | 2023-03-11T00:49:56Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5621.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5621",
"merged_at": "2023-03-11T00:49:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5621.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5621"
} | Adding Oracle Cloud's fsspec implementation to the list of supported cloud storage providers. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5621/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5621/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3286 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3286/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3286/comments | https://api.github.com/repos/huggingface/datasets/issues/3286/events | https://github.com/huggingface/datasets/pull/3286 | 1,056,008,586 | PR_kwDODunzps4updTK | 3,286 | Fix build_docs CI | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [] | 2021-11-17T11:18:56Z | 2021-11-17T11:19:20Z | 2021-11-17T11:19:19Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3286.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3286",
"merged_at": "2021-11-17T11:19:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3286.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3286"
} | Because of https://github.com/Python-Markdown/markdown/issues/1196 we have to temporarily pin `markdown` to 3.3.4 for the docs to build without issues | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3286/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3286/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3902/comments | https://api.github.com/repos/huggingface/datasets/issues/3902/events | https://github.com/huggingface/datasets/issues/3902 | 1,167,403,377 | I_kwDODunzps5FlSlx | 3,902 | Can't import datasets: partially initialized module 'fsspec' has no attribute 'utils' | {
"avatar_url": "https://avatars.githubusercontent.com/u/3166852?v=4",
"events_url": "https://api.github.com/users/arunasank/events{/privacy}",
"followers_url": "https://api.github.com/users/arunasank/followers",
"following_url": "https://api.github.com/users/arunasank/following{/other_user}",
"gists_url": "https://api.github.com/users/arunasank/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/arunasank",
"id": 3166852,
"login": "arunasank",
"node_id": "MDQ6VXNlcjMxNjY4NTI=",
"organizations_url": "https://api.github.com/users/arunasank/orgs",
"received_events_url": "https://api.github.com/users/arunasank/received_events",
"repos_url": "https://api.github.com/users/arunasank/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/arunasank/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arunasank/subscriptions",
"type": "User",
"url": "https://api.github.com/users/arunasank"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [
"Update: `\"python3 -c \"from from datasets import Dataset, DatasetDict\"` works, but not if I import without the `python3 -c`",
"Hi @arunasank, thanks for reporting.\r\n\r\nIt seems that this can be caused because you are using an old version of `fsspec`: the reason why it works if you run `python3` seems to be ... | 2022-03-12T21:22:03Z | 2023-02-09T14:53:49Z | 2022-03-22T07:10:41Z | NONE | null | null | null | ## Describe the bug
Unable to import datasets
## Steps to reproduce the bug
```python
from datasets import Dataset, DatasetDict
```
## Expected results
The import works without errors
## Actual results
```
AttributeError Traceback (most recent call last)
<ipython-input-37-c8cfcbe62127> in <module>
11 # from tqdm import tqdm
12 # import torch
---> 13 from datasets import Dataset
14 # from transformers import Trainer, TrainingArguments, AutoModel, AutoTokenizer, AutoModelForMaskedLM, DataCollatorForLanguageModeling
15 # from sentence_transformers import SentenceTransformer
~/.local/lib/python3.8/site-packages/datasets/__init__.py in <module>
31 )
32
---> 33 from .arrow_dataset import Dataset, concatenate_datasets
34 from .arrow_reader import ArrowReader, ReadInstruction
35 from .arrow_writer import ArrowWriter
~/.local/lib/python3.8/site-packages/datasets/arrow_dataset.py in <module>
46 )
47
---> 48 import fsspec
49 import numpy as np
50 import pandas as pd
~/.local/lib/python3.8/site-packages/fsspec/__init__.py in <module>
10 from . import _version, caching
11 from .callbacks import Callback
---> 12 from .core import get_fs_token_paths, open, open_files, open_local
13 from .exceptions import FSTimeoutError
14 from .mapping import FSMap, get_mapper
~/.local/lib/python3.8/site-packages/fsspec/core.py in <module>
16 caches,
17 )
---> 18 from .compression import compr
19 from .registry import filesystem, get_filesystem_class
20 from .utils import (
~/.local/lib/python3.8/site-packages/fsspec/compression.py in <module>
68
69
---> 70 register_compression("zip", unzip, "zip")
71 register_compression("bz2", BZ2File, "bz2")
72
~/.local/lib/python3.8/site-packages/fsspec/compression.py in register_compression(name, callback, extensions, force)
44
45 for ext in extensions:
---> 46 if ext in fsspec.utils.compressions and not force:
47 raise ValueError(
48 "Duplicate compression file extension: %s (%s)" % (ext, name)
AttributeError: partially initialized module 'fsspec' has no attribute 'utils' (most likely due to a circular import)
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: Jupyter notebook
- Python version: 3.8.10
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3902/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3902/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6170/comments | https://api.github.com/repos/huggingface/datasets/issues/6170/events | https://github.com/huggingface/datasets/pull/6170 | 1,862,705,731 | PR_kwDODunzps5YkJOV | 6,170 | feat: Return the name of the currently loaded file | {
"avatar_url": "https://avatars.githubusercontent.com/u/124021133?v=4",
"events_url": "https://api.github.com/users/Amitesh-Patel/events{/privacy}",
"followers_url": "https://api.github.com/users/Amitesh-Patel/followers",
"following_url": "https://api.github.com/users/Amitesh-Patel/following{/other_user}",
"gists_url": "https://api.github.com/users/Amitesh-Patel/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Amitesh-Patel",
"id": 124021133,
"login": "Amitesh-Patel",
"node_id": "U_kgDOB2RpjQ",
"organizations_url": "https://api.github.com/users/Amitesh-Patel/orgs",
"received_events_url": "https://api.github.com/users/Amitesh-Patel/received_events",
"repos_url": "https://api.github.com/users/Amitesh-Patel/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Amitesh-Patel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Amitesh-Patel/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Amitesh-Patel"
} | [] | open | false | null | [] | null | [
"Your change adds a new element in the key used to avoid duplicates when generating the examples of a dataset. I don't think it fixes the issue you're trying to solve."
] | 2023-08-23T07:08:17Z | 2023-08-29T12:41:05Z | null | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6170.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6170",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6170.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6170"
} | Added an optional parameter return_file_name in the load_dataset function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output.
I added this here https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/json/json.py#L92.
fixes #5806 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6170/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6170/timeline | null | null | true |
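For illustration, this is how the flag proposed in this (unmerged) PR would presumably be used. `return_file_name` and the extra column it adds are the PR's proposal, not an existing `datasets` API, so treat the snippet as hypothetical.

```python
# Hypothetical usage of the proposed return_file_name flag (not part of any release).
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files=["part-0.jsonl", "part-1.jsonl"],  # placeholder file names
    return_file_name=True,  # proposed: expose the source file of each example as a feature
)
print(ds["train"][0])  # expected to contain the originating file name alongside the data
```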
https://api.github.com/repos/huggingface/datasets/issues/5189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5189/comments | https://api.github.com/repos/huggingface/datasets/issues/5189/events | https://github.com/huggingface/datasets/issues/5189 | 1,432,769,143 | I_kwDODunzps5VZlJ3 | 5,189 | Reduce friction in tabular dataset workflow by eliminating having splits when dataset is loaded | {
"avatar_url": "https://avatars.githubusercontent.com/u/53175384?v=4",
"events_url": "https://api.github.com/users/merveenoyan/events{/privacy}",
"followers_url": "https://api.github.com/users/merveenoyan/followers",
"following_url": "https://api.github.com/users/merveenoyan/following{/other_user}",
"gists_url": "https://api.github.com/users/merveenoyan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/merveenoyan",
"id": 53175384,
"login": "merveenoyan",
"node_id": "MDQ6VXNlcjUzMTc1Mzg0",
"organizations_url": "https://api.github.com/users/merveenoyan/orgs",
"received_events_url": "https://api.github.com/users/merveenoyan/received_events",
"repos_url": "https://api.github.com/users/merveenoyan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/merveenoyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/merveenoyan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/merveenoyan"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [
"I have to admit I'm not a fan of this idea, as this would result in a non-consistent behavior between tabular and non-tabular datasets, which is confusing if done without the context you provided. Instead, we could consider returning a `Dataset` object rather than `DatasetDict` if there is only one split in the ge... | 2022-11-02T09:15:02Z | 2022-12-06T12:13:17Z | null | CONTRIBUTOR | null | null | null | ### Feature request
Sorry for the cryptic name, but I'd like to explain using the code itself. When I want to load a specific dataset from a repository (for instance, this one: https://huggingface.co/datasets/inria-soda/tabular-benchmark)
```python
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
print(next(iter(dataset["train"])))
```
The `datasets` library is essentially designed for people who'd like to use benchmark datasets on various modalities to fine-tune their models, and these benchmark datasets usually have pre-defined train and test splits. However, for tabular workflows, having train and test splits usually ends up with the model overfitting to the validation split, so users typically prefer validation techniques like `StratifiedKFoldCrossValidation`, or `GridSearchCrossValidation` when they tune hyperparameters, and the common behavior is to create their own splits. Even [in this paper](https://hal.archives-ouvertes.fr/hal-03723551) a benchmark is introduced, but the split is done by the authors.
It's a bit confusing for the average tabular user to load a dataset and see `"train"`, so it would be nice if we did not load the dataset into a split called `train` by default.
```diff
from datasets import load_dataset
dataset = load_dataset("inria-soda/tabular-benchmark", data_files=["reg_cat/house_sales.csv"], streaming=True)
-print(next(iter(dataset["train"])))
+print(next(iter(dataset)))
```
### Motivation
I explained it above 😅
### Your contribution
I think this is quite a big change that seems small (e.g. how to determine datasets that will not be load to train split?), it's best if we discuss first! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5189/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5189/timeline | null | null | false |
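As a stopgap with the current API (a sketch, not the change requested above), passing `split` explicitly already returns a single `Dataset` rather than a `DatasetDict`, which can then be handed to scikit-learn-style cross-validation:

```python
# Sketch using the existing API: an explicit split returns a single Dataset,
# which the user can then re-split or cross-validate however they prefer.
from datasets import load_dataset

ds = load_dataset(
    "inria-soda/tabular-benchmark",
    data_files=["reg_cat/house_sales.csv"],
    split="train",
)
df = ds.to_pandas()  # hand off to StratifiedKFold / GridSearchCV from here
```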
https://api.github.com/repos/huggingface/datasets/issues/5438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5438/comments | https://api.github.com/repos/huggingface/datasets/issues/5438/events | https://github.com/huggingface/datasets/pull/5438 | 1,537,489,730 | PR_kwDODunzps5HmWA8 | 5,438 | Update actions/checkout in CD Conda release | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-01-18T06:53:15Z | 2023-01-18T13:49:51Z | 2023-01-18T13:42:49Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5438.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5438",
"merged_at": "2023-01-18T13:42:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5438.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5438"
} | This PR updates the "checkout" GitHub Action to its latest version, as previous ones are deprecated: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5438/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5438/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1515 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1515/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1515/comments | https://api.github.com/repos/huggingface/datasets/issues/1515/events | https://github.com/huggingface/datasets/pull/1515 | 764,022,753 | MDExOlB1bGxSZXF1ZXN0NTM4Mjg3NDc0 | 1,515 | Add yoruba text | {
"avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4",
"events_url": "https://api.github.com/users/dadelani/events{/privacy}",
"followers_url": "https://api.github.com/users/dadelani/followers",
"following_url": "https://api.github.com/users/dadelani/following{/other_user}",
"gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dadelani",
"id": 23586676,
"login": "dadelani",
"node_id": "MDQ6VXNlcjIzNTg2Njc2",
"organizations_url": "https://api.github.com/users/dadelani/orgs",
"received_events_url": "https://api.github.com/users/dadelani/received_events",
"repos_url": "https://api.github.com/users/dadelani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dadelani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dadelani"
} | [] | closed | false | null | [] | null | [
"closing since #1379 got merged"
] | 2020-12-12T16:29:30Z | 2020-12-13T18:37:58Z | 2020-12-13T18:37:58Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1515.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1515",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1515.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1515"
} | Adding Yoruba text C3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1515/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1515/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4158/comments | https://api.github.com/repos/huggingface/datasets/issues/4158/events | https://github.com/huggingface/datasets/pull/4158 | 1,202,376,843 | PR_kwDODunzps42ITg3 | 4,158 | Add AUC ROC Metric | {
"avatar_url": "https://avatars.githubusercontent.com/u/27527747?v=4",
"events_url": "https://api.github.com/users/emibaylor/events{/privacy}",
"followers_url": "https://api.github.com/users/emibaylor/followers",
"following_url": "https://api.github.com/users/emibaylor/following{/other_user}",
"gists_url": "https://api.github.com/users/emibaylor/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/emibaylor",
"id": 27527747,
"login": "emibaylor",
"node_id": "MDQ6VXNlcjI3NTI3NzQ3",
"organizations_url": "https://api.github.com/users/emibaylor/orgs",
"received_events_url": "https://api.github.com/users/emibaylor/received_events",
"repos_url": "https://api.github.com/users/emibaylor/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/emibaylor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/emibaylor/subscriptions",
"type": "User",
"url": "https://api.github.com/users/emibaylor"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-04-12T20:53:28Z | 2022-04-26T19:41:50Z | 2022-04-26T19:35:22Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4158.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4158",
"merged_at": "2022-04-26T19:35:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4158.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4158"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4158/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4158/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5790/comments | https://api.github.com/repos/huggingface/datasets/issues/5790/events | https://github.com/huggingface/datasets/pull/5790 | 1,683,229,126 | PR_kwDODunzps5PG0mJ | 5,790 | Allow to run CI on push to ci-branch | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-04-25T13:57:26Z | 2023-04-26T13:43:08Z | 2023-04-26T13:35:47Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5790.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5790",
"merged_at": "2023-04-26T13:35:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5790.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5790"
} | This PR allows to run the CI on push to a branch named "ci-*", without needing to open a PR.
- This will allow to make CI tests without opening a PR, e.g., for future `huggingface-hub` releases, future dependency releases (like `fsspec`, `pandas`,...)
Note that to build the documentation, we already allow it on push to a branch named "doc-builder*".
See:
- #5788
CC: @Wauplin | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5790/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5790/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1662/comments | https://api.github.com/repos/huggingface/datasets/issues/1662/events | https://github.com/huggingface/datasets/issues/1662 | 775,890,154 | MDU6SXNzdWU3NzU4OTAxNTQ= | 1,662 | Arrow file is too large when saving vector data | {
"avatar_url": "https://avatars.githubusercontent.com/u/22360336?v=4",
"events_url": "https://api.github.com/users/weiwangthu/events{/privacy}",
"followers_url": "https://api.github.com/users/weiwangthu/followers",
"following_url": "https://api.github.com/users/weiwangthu/following{/other_user}",
"gists_url": "https://api.github.com/users/weiwangthu/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/weiwangthu",
"id": 22360336,
"login": "weiwangthu",
"node_id": "MDQ6VXNlcjIyMzYwMzM2",
"organizations_url": "https://api.github.com/users/weiwangthu/orgs",
"received_events_url": "https://api.github.com/users/weiwangthu/received_events",
"repos_url": "https://api.github.com/users/weiwangthu/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/weiwangthu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/weiwangthu/subscriptions",
"type": "User",
"url": "https://api.github.com/users/weiwangthu"
} | [] | closed | false | null | [] | null | [
"Hi !\r\nThe arrow file size is due to the embeddings. Indeed if they're stored as float32 then the total size of the embeddings is\r\n\r\n20 000 000 vectors * 768 dimensions * 4 bytes per dimension ~= 60GB\r\n\r\nIf you want to reduce the size you can consider using quantization for example, or maybe using dimensi... | 2020-12-29T13:23:12Z | 2021-01-21T14:12:39Z | 2021-01-21T14:12:39Z | NONE | null | null | null | I computed the sentence embedding of each sentence of bookcorpus data using bert base and saved them to disk. I used 20M sentences and the obtained arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the arrow file? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1662/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1662/timeline | null | completed | false |
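The reported size is consistent with float32 storage (20M vectors × 768 dimensions × 4 bytes ≈ 57 GiB). Below is a rough size check plus a hedged sketch of down-casting to float16; the column name `embedding` is an assumption for illustration.

```python
# Back-of-the-envelope size check for 20M x 768 embeddings, plus a float16 sketch.
import numpy as np

n_vectors, dim = 20_000_000, 768
print(f"float32: {n_vectors * dim * 4 / 1024**3:.1f} GiB")  # ~57 GiB
print(f"float16: {n_vectors * dim * 2 / 1024**3:.1f} GiB")  # ~29 GiB

def to_fp16(batch):
    batch["embedding"] = np.asarray(batch["embedding"], dtype=np.float16)
    return batch

# dataset = dataset.map(to_fp16, batched=True)  # assumes an "embedding" column exists
```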
https://api.github.com/repos/huggingface/datasets/issues/2436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2436/comments | https://api.github.com/repos/huggingface/datasets/issues/2436/events | https://github.com/huggingface/datasets/pull/2436 | 908,100,211 | MDExOlB1bGxSZXF1ZXN0NjU4ODQzMzQy | 2,436 | Update DatasetMetadata and ReadMe | {
"avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4",
"events_url": "https://api.github.com/users/gchhablani/events{/privacy}",
"followers_url": "https://api.github.com/users/gchhablani/followers",
"following_url": "https://api.github.com/users/gchhablani/following{/other_user}",
"gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gchhablani",
"id": 29076344,
"login": "gchhablani",
"node_id": "MDQ6VXNlcjI5MDc2MzQ0",
"organizations_url": "https://api.github.com/users/gchhablani/orgs",
"received_events_url": "https://api.github.com/users/gchhablani/received_events",
"repos_url": "https://api.github.com/users/gchhablani/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gchhablani"
} | [] | closed | false | null | [] | null | [] | 2021-06-01T09:32:37Z | 2021-06-14T13:23:27Z | 2021-06-14T13:23:26Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2436.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2436",
"merged_at": "2021-06-14T13:23:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2436.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2436"
} | This PR contains the changes discussed in #2395.
**Edit**:
In addition to those changes, I'll be updating the `ReadMe` as follows:
Currently, `Section` has separate parsing and validation error lists. In `.validate()`, we add these lists to the final lists and throw errors.
One way to make `ReadMe` consistent with `DatasetMetadata` and add a separate `.validate()` method is to throw separate parsing and validation errors.
This way, we don't have to throw validation errors, but only parsing errors in `__init__ ()`. We can have an option in `__init__()` to suppress parsing errors so that an object is created for validation. Doing this will allow the user to get all the errors in one go.
In `test_dataset_cards` , we are already catching error messages and appending to a list. This can be done for `ReadMe()` for parsing errors, and `ReadMe(...,suppress_errors=True); readme.validate()` for validation, separately.
**Edit 2**:
The only parsing issue we have as of now is multiple headings at the same level with the same name. I assume this will happen very rarely, but it is still better to throw an error than silently pick one of them. It should be okay to separate it this way.
Wdyt @lhoestq ?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2436/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2436/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1231/comments | https://api.github.com/repos/huggingface/datasets/issues/1231/events | https://github.com/huggingface/datasets/pull/1231 | 758,121,398 | MDExOlB1bGxSZXF1ZXN0NTMzMzQzMzAz | 1,231 | Add Urdu Sentiment Corpus (USC) | {
"avatar_url": "https://avatars.githubusercontent.com/u/44389205?v=4",
"events_url": "https://api.github.com/users/chaitnayabasava/events{/privacy}",
"followers_url": "https://api.github.com/users/chaitnayabasava/followers",
"following_url": "https://api.github.com/users/chaitnayabasava/following{/other_user}",
"gists_url": "https://api.github.com/users/chaitnayabasava/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chaitnayabasava",
"id": 44389205,
"login": "chaitnayabasava",
"node_id": "MDQ6VXNlcjQ0Mzg5MjA1",
"organizations_url": "https://api.github.com/users/chaitnayabasava/orgs",
"received_events_url": "https://api.github.com/users/chaitnayabasava/received_events",
"repos_url": "https://api.github.com/users/chaitnayabasava/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chaitnayabasava/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chaitnayabasava/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chaitnayabasava"
} | [] | closed | false | null | [] | null | [] | 2020-12-07T03:25:20Z | 2020-12-07T18:05:16Z | 2020-12-07T16:43:23Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1231.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1231",
"merged_at": "2020-12-07T16:43:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1231.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1231"
} | @lhoestq opened a clean PR containing only relevant files.
old PR #1140 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1231/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1231/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2292 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2292/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2292/comments | https://api.github.com/repos/huggingface/datasets/issues/2292/events | https://github.com/huggingface/datasets/pull/2292 | 871,230,183 | MDExOlB1bGxSZXF1ZXN0NjI2MjgzNTYy | 2,292 | Fixed typo seperate->separate | {
"avatar_url": "https://avatars.githubusercontent.com/u/32505743?v=4",
"events_url": "https://api.github.com/users/laksh9950/events{/privacy}",
"followers_url": "https://api.github.com/users/laksh9950/followers",
"following_url": "https://api.github.com/users/laksh9950/following{/other_user}",
"gists_url": "https://api.github.com/users/laksh9950/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/laksh9950",
"id": 32505743,
"login": "laksh9950",
"node_id": "MDQ6VXNlcjMyNTA1NzQz",
"organizations_url": "https://api.github.com/users/laksh9950/orgs",
"received_events_url": "https://api.github.com/users/laksh9950/received_events",
"repos_url": "https://api.github.com/users/laksh9950/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/laksh9950/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/laksh9950/subscriptions",
"type": "User",
"url": "https://api.github.com/users/laksh9950"
} | [] | closed | false | null | [] | null | [] | 2021-04-29T16:40:53Z | 2021-04-30T13:29:18Z | 2021-04-30T13:03:12Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2292.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2292",
"merged_at": "2021-04-30T13:03:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2292.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2292"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2292/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2292/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/4630 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4630/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4630/comments | https://api.github.com/repos/huggingface/datasets/issues/4630/events | https://github.com/huggingface/datasets/pull/4630 | 1,293,470,728 | PR_kwDODunzps460HFM | 4,630 | fix(dataset_wrappers): Fixes access to fsspec.asyn in torch_iterable_dataset.py. | {
"avatar_url": "https://avatars.githubusercontent.com/u/4120639?v=4",
"events_url": "https://api.github.com/users/gugarosa/events{/privacy}",
"followers_url": "https://api.github.com/users/gugarosa/followers",
"following_url": "https://api.github.com/users/gugarosa/following{/other_user}",
"gists_url": "https://api.github.com/users/gugarosa/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/gugarosa",
"id": 4120639,
"login": "gugarosa",
"node_id": "MDQ6VXNlcjQxMjA2Mzk=",
"organizations_url": "https://api.github.com/users/gugarosa/orgs",
"received_events_url": "https://api.github.com/users/gugarosa/received_events",
"repos_url": "https://api.github.com/users/gugarosa/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/gugarosa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gugarosa/subscriptions",
"type": "User",
"url": "https://api.github.com/users/gugarosa"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-07-04T18:26:55Z | 2022-07-05T15:19:52Z | 2022-07-05T15:08:21Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4630.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4630",
"merged_at": "2022-07-05T15:08:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4630.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4630"
} | Fix #4612.
Apparently, the newest `fsspec` versions do not allow access to attribute-based submodules if they are not imported, such as `fsspec.asyn`.
Thus, @mariosasko suggested adding the missing import so that the submodule can be accessed.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4630/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4630/timeline | null | null | true |
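A minimal reproduction of the idea behind the fix, assuming a recent `fsspec`: the submodule has to be imported explicitly before it can be reached as an attribute.

```python
# Sketch: fsspec.asyn is only reachable as an attribute once the submodule is imported.
import fsspec
import fsspec.asyn  # without this line, fsspec.asyn may raise AttributeError

print(hasattr(fsspec, "asyn"))  # True once the submodule is loaded
```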
https://api.github.com/repos/huggingface/datasets/issues/1461 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1461/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1461/comments | https://api.github.com/repos/huggingface/datasets/issues/1461/events | https://github.com/huggingface/datasets/pull/1461 | 761,415,420 | MDExOlB1bGxSZXF1ZXN0NTM2MDgzODY5 | 1,461 | Adding NewsQA dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/18527321?v=4",
"events_url": "https://api.github.com/users/rsanjaykamath/events{/privacy}",
"followers_url": "https://api.github.com/users/rsanjaykamath/followers",
"following_url": "https://api.github.com/users/rsanjaykamath/following{/other_user}",
"gists_url": "https://api.github.com/users/rsanjaykamath/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rsanjaykamath",
"id": 18527321,
"login": "rsanjaykamath",
"node_id": "MDQ6VXNlcjE4NTI3MzIx",
"organizations_url": "https://api.github.com/users/rsanjaykamath/orgs",
"received_events_url": "https://api.github.com/users/rsanjaykamath/received_events",
"repos_url": "https://api.github.com/users/rsanjaykamath/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rsanjaykamath/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rsanjaykamath/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rsanjaykamath"
} | [] | closed | false | null | [] | null | [
"Generate the dummy dataset then regenerate the dataset_info.json file, ",
"> Generate the dummy dataset then regenerate the dataset_info.json file,\r\n\r\nThe pytest scripts do not accept manual directory inputs for the data provided manually. This is why the tests fail. ",
"don't use the --auto-generate argum... | 2020-12-10T17:01:10Z | 2020-12-17T18:29:03Z | 2020-12-17T18:27:36Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1461.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1461",
"merged_at": "2020-12-17T18:27:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1461.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1461"
} | Since the dataset has legal restrictions on circulating the original data, it has to be manually downloaded by the user and loaded into the library. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1461/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1461/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5658/comments | https://api.github.com/repos/huggingface/datasets/issues/5658/events | https://github.com/huggingface/datasets/pull/5658 | 1,634,867,204 | PR_kwDODunzps5MmJe0 | 5,658 | docs: Update num_shards docs to mention num_proc on Dataset and DatasetDict | {
"avatar_url": "https://avatars.githubusercontent.com/u/78612354?v=4",
"events_url": "https://api.github.com/users/connor-henderson/events{/privacy}",
"followers_url": "https://api.github.com/users/connor-henderson/followers",
"following_url": "https://api.github.com/users/connor-henderson/following{/other_user}",
"gists_url": "https://api.github.com/users/connor-henderson/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/connor-henderson",
"id": 78612354,
"login": "connor-henderson",
"node_id": "MDQ6VXNlcjc4NjEyMzU0",
"organizations_url": "https://api.github.com/users/connor-henderson/orgs",
"received_events_url": "https://api.github.com/users/connor-henderson/received_events",
"repos_url": "https://api.github.com/users/connor-henderson/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/connor-henderson/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/connor-henderson/subscriptions",
"type": "User",
"url": "https://api.github.com/users/connor-henderson"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-03-22T00:12:18Z | 2023-03-24T16:43:34Z | 2023-03-24T16:36:21Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5658.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5658",
"merged_at": "2023-03-24T16:36:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5658.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5658"
} | Closes #5653
@mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5658/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5658/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5698/comments | https://api.github.com/repos/huggingface/datasets/issues/5698/events | https://github.com/huggingface/datasets/issues/5698 | 1,652,183,611 | I_kwDODunzps5ielI7 | 5,698 | Add Qdrant as another search index | {
"avatar_url": "https://avatars.githubusercontent.com/u/2649301?v=4",
"events_url": "https://api.github.com/users/kacperlukawski/events{/privacy}",
"followers_url": "https://api.github.com/users/kacperlukawski/followers",
"following_url": "https://api.github.com/users/kacperlukawski/following{/other_user}",
"gists_url": "https://api.github.com/users/kacperlukawski/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/kacperlukawski",
"id": 2649301,
"login": "kacperlukawski",
"node_id": "MDQ6VXNlcjI2NDkzMDE=",
"organizations_url": "https://api.github.com/users/kacperlukawski/orgs",
"received_events_url": "https://api.github.com/users/kacperlukawski/received_events",
"repos_url": "https://api.github.com/users/kacperlukawski/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/kacperlukawski/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kacperlukawski/subscriptions",
"type": "User",
"url": "https://api.github.com/users/kacperlukawski"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | null | [
"@mariosasko I'd appreciate your feedback on this. "
] | 2023-04-03T14:25:19Z | 2023-04-11T10:28:40Z | null | CONTRIBUTOR | null | null | null | ### Feature request
I'd suggest adding Qdrant (https://qdrant.tech) as another search index available, so users can directly build an index from a dataset. Currently, FAISS and ElasticSearch are only supported: https://huggingface.co/docs/datasets/faiss_es
### Motivation
ElasticSearch is a keyword-based search system, while FAISS is a vector search library. A vector database, such as Qdrant, is a different kind of tool: it is also similarity-based (like FAISS) but is not limited to a single machine. That makes a vector database well-suited for bigger datasets and for collaboration, when several people want to access a particular dataset.
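For reference, a minimal sketch of how the existing FAISS integration is used today (the toy data, column name, and vector size are placeholders; a Qdrant-backed index exposing a similar `add_*_index` / `get_nearest_examples` surface is an assumption of this sketch, not part of the proposal):

```python
import numpy as np
from datasets import Dataset

# Toy dataset with an "embeddings" column (placeholder data; requires faiss-cpu installed).
ds = Dataset.from_dict({
    "text": ["a", "b", "c"],
    "embeddings": np.random.rand(3, 8).astype("float32").tolist(),
})

# Build an in-process FAISS index over that column (the current, single-machine behaviour).
ds.add_faiss_index(column="embeddings")

# Query it with a vector of the same dimensionality.
query = np.random.rand(8).astype("float32")
scores, examples = ds.get_nearest_examples("embeddings", query, k=2)
print(scores, examples["text"])
```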
### Your contribution
I can provide a PR implementing that functionality on my own. | {
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5698/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5698/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/2926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2926/comments | https://api.github.com/repos/huggingface/datasets/issues/2926/events | https://github.com/huggingface/datasets/issues/2926 | 997,463,277 | I_kwDODunzps47dBTt | 2,926 | Error when downloading datasets to non-traditional cache directories | {
"avatar_url": "https://avatars.githubusercontent.com/u/45885627?v=4",
"events_url": "https://api.github.com/users/dar-tau/events{/privacy}",
"followers_url": "https://api.github.com/users/dar-tau/followers",
"following_url": "https://api.github.com/users/dar-tau/following{/other_user}",
"gists_url": "https://api.github.com/users/dar-tau/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/dar-tau",
"id": 45885627,
"login": "dar-tau",
"node_id": "MDQ6VXNlcjQ1ODg1NjI3",
"organizations_url": "https://api.github.com/users/dar-tau/orgs",
"received_events_url": "https://api.github.com/users/dar-tau/received_events",
"repos_url": "https://api.github.com/users/dar-tau/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/dar-tau/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dar-tau/subscriptions",
"type": "User",
"url": "https://api.github.com/users/dar-tau"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"Same here !"
] | 2021-09-15T19:59:46Z | 2021-11-24T21:42:31Z | null | NONE | null | null | null | ## Describe the bug
When the cache directory is linked (soft link) to a directory on a NetApp device, the download fails.
## Steps to reproduce the bug
```bash
ln -s /path/to/netapp/.cache ~/.cache
```
```python
load_dataset("imdb")
```
## Expected results
Successfully loading IMDB dataset
## Actual results
```
datasets.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33432835,
num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=0, num_examples=0,
dataset_name='imdb')}, {'expected': SplitInfo(name='test', num_bytes=32650697, num_examples=25000, dataset_name='imdb'),
'recorded': SplitInfo(name='test', num_bytes=659932, num_examples=503, dataset_name='imdb')}, {'expected':
SplitInfo(name='unsupervised', num_bytes=67106814, num_examples=50000, dataset_name='imdb'), 'recorded':
SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.1.2
- Platform: Ubuntu
- Python version: 3.8
## Extra notes
Stranger yet, trying to debug the phenomenon, I found the range of results to vary a lot without clear direction:
- With `cache_dir="/path/to/netapp/.cache"` the same thing happens.
- However, when linking `~/netapp/` to `/path/to/netapp` *and* setting `cache_dir="~/netapp/.cache/huggingface/datasets"` - it does work
- On the other hand, when linking `~/.cache` to `~/netapp/.cache` without using `cache_dir`, it does not work anymore.
While I could only test this on a NetApp device, it might affect any other mounted FS as well.
Thanks :)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2926/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2926/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1441 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1441/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1441/comments | https://api.github.com/repos/huggingface/datasets/issues/1441/events | https://github.com/huggingface/datasets/pull/1441 | 761,021,823 | MDExOlB1bGxSZXF1ZXN0NTM1NzUzMjI5 | 1,441 | Add Igbo-English Machine Translation Dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4",
"events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}",
"followers_url": "https://api.github.com/users/abhishekkrthakur/followers",
"following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}",
"gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/abhishekkrthakur",
"id": 1183441,
"login": "abhishekkrthakur",
"node_id": "MDQ6VXNlcjExODM0NDE=",
"organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs",
"received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events",
"repos_url": "https://api.github.com/users/abhishekkrthakur/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions",
"type": "User",
"url": "https://api.github.com/users/abhishekkrthakur"
} | [] | closed | false | null | [] | null | [] | 2020-12-10T08:25:34Z | 2020-12-11T15:54:53Z | 2020-12-11T15:54:52Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1441.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1441",
"merged_at": "2020-12-11T15:54:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1441.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1441"
} | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1441/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1441/timeline | null | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/4990 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4990/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4990/comments | https://api.github.com/repos/huggingface/datasets/issues/4990/events | https://github.com/huggingface/datasets/issues/4990 | 1,378,120,806 | I_kwDODunzps5SJHRm | 4,990 | "no-token" is passed to `huggingface_hub` when token is `None` | {
"avatar_url": "https://avatars.githubusercontent.com/u/11801849?v=4",
"events_url": "https://api.github.com/users/Wauplin/events{/privacy}",
"followers_url": "https://api.github.com/users/Wauplin/followers",
"following_url": "https://api.github.com/users/Wauplin/following{/other_user}",
"gists_url": "https://api.github.com/users/Wauplin/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Wauplin",
"id": 11801849,
"login": "Wauplin",
"node_id": "MDQ6VXNlcjExODAxODQ5",
"organizations_url": "https://api.github.com/users/Wauplin/orgs",
"received_events_url": "https://api.github.com/users/Wauplin/received_events",
"repos_url": "https://api.github.com/users/Wauplin/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Wauplin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Wauplin/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Wauplin"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [
"Hi @Wauplin, thanks for raising this potential issue.\r\n\r\nThe choice of passing `\"no-token\"` instead of `None` was made in this PR:\r\n- #4536 \r\n\r\nAccording to the PR description, the reason why it is passed is to avoid that `HfApi.dataset_info` uses the local token when no token should be used.",
"Hi @... | 2022-09-19T15:14:40Z | 2022-09-30T09:16:00Z | 2022-09-30T09:16:00Z | CONTRIBUTOR | null | null | null | ## Describe the bug
In the 2 lines listed below, a token is passed to `huggingface_hub` to get information from a dataset. If no token is provided, a "no-token" string is passed instead. What is the purpose of this? If there is no real one, I would prefer the `None` value to be sent directly and handled by `huggingface_hub`. I feel that this only works because we assume the token will never be validated.
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L753
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L1121
## Expected results
Pass `token=None` to `huggingface_hub`.
## Actual results
`token="no-token"` is passed.
## Environment info
`huggingface_hub v0.10.0dev` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4990/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4990/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3411 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3411/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3411/comments | https://api.github.com/repos/huggingface/datasets/issues/3411/events | https://github.com/huggingface/datasets/issues/3411 | 1,075,846,272 | I_kwDODunzps5AIByA | 3,411 | [chinese wwm] load_datasets behavior not as expected when using run_mlm_wwm.py script | {
"avatar_url": "https://avatars.githubusercontent.com/u/52968111?v=4",
"events_url": "https://api.github.com/users/hyusterr/events{/privacy}",
"followers_url": "https://api.github.com/users/hyusterr/followers",
"following_url": "https://api.github.com/users/hyusterr/following{/other_user}",
"gists_url": "https://api.github.com/users/hyusterr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/hyusterr",
"id": 52968111,
"login": "hyusterr",
"node_id": "MDQ6VXNlcjUyOTY4MTEx",
"organizations_url": "https://api.github.com/users/hyusterr/orgs",
"received_events_url": "https://api.github.com/users/hyusterr/received_events",
"repos_url": "https://api.github.com/users/hyusterr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/hyusterr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hyusterr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/hyusterr"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | [] | null | [
"@LysandreJik not so sure who to @\r\nCould you help?",
"Hi @hyusterr, I believe it is @wlhgtc from https://github.com/huggingface/transformers/pull/9887"
] | 2021-12-09T17:54:35Z | 2021-12-22T11:21:33Z | null | NONE | null | null | null | ## Describe the bug
Model I am using (Bert, XLNet ...): bert-base-chinese
The problem arises when using:
* [https://github.com/huggingface/transformers/blob/master/examples/research_projects/mlm_wwm/run_mlm_wwm.py] the official example scripts: `run_mlm_wwm.py`
The task I am working on is: pretraining with whole word masking on my own dataset and ref.json file
I tried to follow the run_mlm_wwm.py procedure to do whole word masking for the pretraining task. My file is in .txt form, where one line represents one sample, with `9,264,784` Chinese lines in total. The ref.json file also contains 9,264,784 lines of whole word masking reference data for my Chinese corpus. But when I try to adapt the run_mlm_wwm.py script, it shows that somehow after
`datasets["train"] = load_dataset(...`
`len(datasets["train"])` returns `9,265,365`
then, after `tokenized_datasets = datasets.map(...`
`len(tokenized_datasets["train"])` returns `9,265,279`
I'm really confused; I tried to trace the code myself, but after a week of trying I still can't tell what happened.
I want to know what happened in the `load_dataset()` function and `datasets.map` here, and how I ended up with more lines of data than I put in, so I'm here to ask.
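As a rough diagnostic (a sketch only; the file path is a placeholder, and this uses the plain `text` loader, which may differ from how run_mlm_wwm.py loads the corpus), one can compare the raw line count with the number of rows `load_dataset` produces:

```python
from datasets import load_dataset

corpus_path = "corpus.txt"  # placeholder path to the pretraining .txt file

# Count the raw lines in the file.
with open(corpus_path, encoding="utf-8") as f:
    n_lines = sum(1 for _ in f)

ds = load_dataset("text", data_files={"train": corpus_path})["train"]
print(n_lines, len(ds))  # a mismatch here means the discrepancy already appears at load time
```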
## To reproduce
Sorry that I can't provide my data here since it does not belong to me, but I'm sure I removed the blank lines.
## Expected behavior
I expect the code to run as it should, but the AssertionError in line 167 keeps being raised because the number of lines in the reference json and in datasets['train'] differ.
Thanks for your patient reading!
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.8.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3411/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3411/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/5282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5282/comments | https://api.github.com/repos/huggingface/datasets/issues/5282/events | https://github.com/huggingface/datasets/pull/5282 | 1,460,238,928 | PR_kwDODunzps5Det2_ | 5,282 | Release: 2.7.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [] | 2022-11-22T16:58:54Z | 2022-11-22T17:21:28Z | 2022-11-22T17:21:27Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5282.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5282",
"merged_at": "2022-11-22T17:21:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5282.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5282"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5282/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3733/comments | https://api.github.com/repos/huggingface/datasets/issues/3733/events | https://github.com/huggingface/datasets/issues/3733 | 1,140,011,378 | I_kwDODunzps5D8zFy | 3,733 | Bugs in NewsQA dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/o... | null | [] | 2022-02-16T13:17:37Z | 2022-02-17T07:54:25Z | 2022-02-17T07:54:25Z | MEMBER | null | null | null | ## Describe the bug
NewsQA dataset has the following bugs:
- the field `validated_answers` is an exact copy of the field `answers` but with the addition of `'count': [0]` to each dict
- the field `badQuestion` does not appear in `answers` nor `validated_answers`
## Steps to reproduce the bug
By inspecting the dataset script we can see that:
- the parsing of `validated_answers` is a copy-paste of the one for `answers`
- the `badQuestion` field is ignored in the parsing of both `answers` and `validated_answers`
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3733/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3733/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/6121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6121/comments | https://api.github.com/repos/huggingface/datasets/issues/6121/events | https://github.com/huggingface/datasets/pull/6121 | 1,836,761,712 | PR_kwDODunzps5XMsWd | 6,121 | Small typo in the code example of create imagefolder dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/19688994?v=4",
"events_url": "https://api.github.com/users/WangXin93/events{/privacy}",
"followers_url": "https://api.github.com/users/WangXin93/followers",
"following_url": "https://api.github.com/users/WangXin93/following{/other_user}",
"gists_url": "https://api.github.com/users/WangXin93/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/WangXin93",
"id": 19688994,
"login": "WangXin93",
"node_id": "MDQ6VXNlcjE5Njg4OTk0",
"organizations_url": "https://api.github.com/users/WangXin93/orgs",
"received_events_url": "https://api.github.com/users/WangXin93/received_events",
"repos_url": "https://api.github.com/users/WangXin93/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/WangXin93/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/WangXin93/subscriptions",
"type": "User",
"url": "https://api.github.com/users/WangXin93"
} | [] | closed | false | null | [] | null | [
"Hi,\r\n\r\nI found a small typo in the code example of create imagefolder dataset. It confused me a little when I first saw it.\r\n\r\nBest Regards.\r\n\r\nXin"
] | 2023-08-04T13:36:59Z | 2023-08-04T13:45:32Z | 2023-08-04T13:41:43Z | NONE | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6121.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6121",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/6121.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6121"
} | Fix typo in the code example of loading an imagefolder dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6121/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6121/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2842/comments | https://api.github.com/repos/huggingface/datasets/issues/2842/events | https://github.com/huggingface/datasets/issues/2842 | 980,725,899 | MDU6SXNzdWU5ODA3MjU4OTk= | 2,842 | always requiring the username in the dataset name when there is one | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://api.github.com/users/stas00/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/stas00",
"id": 10676103,
"login": "stas00",
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"organizations_url": "https://api.github.com/users/stas00/orgs",
"received_events_url": "https://api.github.com/users/stas00/received_events",
"repos_url": "https://api.github.com/users/stas00/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stas00/subscriptions",
"type": "User",
"url": "https://api.github.com/users/stas00"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [
"From what I can understand, you want the saved arrow file directory to have username as well instead of just dataset name if it was downloaded with the user prefix?",
"I don't think the user cares of how this is done, but the 2nd command should fail, IMHO, as its dataset name is invalid:\r\n```\r\n# first run\r\... | 2021-08-26T23:31:53Z | 2021-10-22T09:43:35Z | 2021-10-22T09:43:35Z | CONTRIBUTOR | null | null | null | Me and now another person have been bitten by the `datasets`'s non-strictness on requiring a dataset creator's username when it's due.
So both of us started with `stas/openwebtext-10k`, somewhere along the line lost the `stas/` prefix and continued using `openwebtext-10k`, and all was good until we published the software and things broke, since there is no `openwebtext-10k`.
So this feature request is asking to tighten the checking and not allow dataset loading if it was downloaded with the user prefix, but then attempted to be used w/o it.
The same in code:
```
# first run
python -c "from datasets import load_dataset; load_dataset('stas/openwebtext-10k')"
# now run immediately
python -c "from datasets import load_dataset; load_dataset('openwebtext-10k')"
# the second command should fail, but it doesn't fail now.
```
Please let me know if I explained myself clearly.
Thank you! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2842/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2842/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5737 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5737/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5737/comments | https://api.github.com/repos/huggingface/datasets/issues/5737/events | https://github.com/huggingface/datasets/issues/5737 | 1,662,919,811 | I_kwDODunzps5jHiSD | 5,737 | ClassLabel Error | {
"avatar_url": "https://avatars.githubusercontent.com/u/10896776?v=4",
"events_url": "https://api.github.com/users/mrcaelumn/events{/privacy}",
"followers_url": "https://api.github.com/users/mrcaelumn/followers",
"following_url": "https://api.github.com/users/mrcaelumn/following{/other_user}",
"gists_url": "https://api.github.com/users/mrcaelumn/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mrcaelumn",
"id": 10896776,
"login": "mrcaelumn",
"node_id": "MDQ6VXNlcjEwODk2Nzc2",
"organizations_url": "https://api.github.com/users/mrcaelumn/orgs",
"received_events_url": "https://api.github.com/users/mrcaelumn/received_events",
"repos_url": "https://api.github.com/users/mrcaelumn/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mrcaelumn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mrcaelumn/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mrcaelumn"
} | [] | closed | false | null | [] | null | [
"Hi, you can use the `cast_column` function to change the feature type from a `Value(int64)` to `ClassLabel`:\r\n\r\n```py\r\ndataset = dataset.cast_column(\"label\", ClassLabel(names=[\"label_1\", \"label_2\", \"label_3\"]))\r\nprint(dataset.features)\r\n{'text': Value(dtype='string', id=None),\r\n 'label': ClassL... | 2023-04-11T17:14:13Z | 2023-04-13T16:49:57Z | 2023-04-13T16:49:57Z | NONE | null | null | null | ### Describe the bug
I am still getting the error "__call__() takes 1 positional argument but 2 were given" even after ensuring that the value being passed to the label object is a single value and that the ClassLabel object has been created with the correct number of label classes.
### Steps to reproduce the bug
from datasets import ClassLabel, Dataset
1. Create the ClassLabel object with 3 label values and their corresponding names
label_test = ClassLabel(num_classes=3, names=["label_1", "label_2", "label_3"])
2. Define a dictionary with text and label fields
data = {
'text': ['text_1', 'text_2', 'text_3'],
'label': [1, 2, 3],
}
3. Create a Hugging Face dataset from the dictionary
dataset = Dataset.from_dict(data)
print(dataset.features)
4. Map the label values to their corresponding label names using the label object
dataset = dataset.map(lambda example: {'text': example['text'], 'label': label_test(example['label'])})
5. Print the resulting dataset
print(dataset)
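For comparison, here is a minimal sketch of the `cast_column` approach suggested in the comment above, instead of calling the ClassLabel object directly; note that the 0-based label ids (0..2 for three classes) are an adjustment to the original example:

```python
from datasets import ClassLabel, Dataset

data = {
    "text": ["text_1", "text_2", "text_3"],
    "label": [0, 1, 2],  # ClassLabel ids are 0-based, so three classes use 0..2
}

dataset = Dataset.from_dict(data)
# Cast the integer column to a ClassLabel feature instead of mapping it by hand.
dataset = dataset.cast_column("label", ClassLabel(names=["label_1", "label_2", "label_3"]))
print(dataset.features)  # "label" should now be a ClassLabel rather than int64
```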
### Expected behavior
I expect my label column to be of type ClassLabel instead of int.
### Environment info
python 3.9
google colab | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5737/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5737/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/5083 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5083/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5083/comments | https://api.github.com/repos/huggingface/datasets/issues/5083/events | https://github.com/huggingface/datasets/issues/5083 | 1,399,842,514 | I_kwDODunzps5Tb-bS | 5,083 | Support numpy/torch/tf/jax formatting for IterableDataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "fef2c0",
"default": fals... | closed | false | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists... | null | [
"hii @lhoestq, can you assign this issue to me? Though i am new to open source still I would love to put my best foot forward. I can see there isn't anyone right now assigned to this issue.",
"Hi @zutarich ! This issue was fixed by #5852 - sorry I forgot to close it\r\n\r\nFeel free to look for other issues and p... | 2022-10-06T15:14:58Z | 2023-10-09T12:42:15Z | 2023-10-09T12:42:15Z | MEMBER | null | null | null | Right now `IterableDataset` doesn't do any formatting.
In particular this code should return a numpy array:
```python
from datasets import load_dataset
ds = load_dataset("imagenet-1k", split="train", streaming=True).with_format("np")
print(next(iter(ds))["image"])
```
Right now it returns a PIL.Image.
Setting `streaming=False` does return a numpy array after #5072 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5083/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5083/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/2907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2907/comments | https://api.github.com/repos/huggingface/datasets/issues/2907/events | https://github.com/huggingface/datasets/pull/2907 | 995,968,152 | PR_kwDODunzps4rumOy | 2,907 | add story_cloze dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"events_url": "https://api.github.com/users/zaidalyafeai/events{/privacy}",
"followers_url": "https://api.github.com/users/zaidalyafeai/followers",
"following_url": "https://api.github.com/users/zaidalyafeai/following{/other_user}",
"gists_url": "https://api.github.com/users/zaidalyafeai/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/zaidalyafeai",
"id": 15667714,
"login": "zaidalyafeai",
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"organizations_url": "https://api.github.com/users/zaidalyafeai/orgs",
"received_events_url": "https://api.github.com/users/zaidalyafeai/received_events",
"repos_url": "https://api.github.com/users/zaidalyafeai/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/zaidalyafeai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zaidalyafeai/subscriptions",
"type": "User",
"url": "https://api.github.com/users/zaidalyafeai"
} | [] | closed | false | null | [] | null | [
"Will create a new one, this one seems to be missed up. "
] | 2021-09-14T12:36:53Z | 2021-10-08T21:41:42Z | 2021-10-08T21:41:41Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2907.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2907",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2907.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2907"
} | @lhoestq I have spent some time but I still can't succeed in correctly testing the dummy_data. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2907/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2907/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3647 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3647/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3647/comments | https://api.github.com/repos/huggingface/datasets/issues/3647/events | https://github.com/huggingface/datasets/pull/3647 | 1,117,383,675 | PR_kwDODunzps4xvGDQ | 3,647 | Fix `add_column` on datasets with indices mapping | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"Sure, let's include this in today's release.",
"Cool ! The windows CI should be fixed on master now, feel free to merge :)"
] | 2022-01-28T13:06:29Z | 2022-01-28T15:35:58Z | 2022-01-28T15:35:58Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3647.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3647",
"merged_at": "2022-01-28T15:35:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3647.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3647"
} | My initial idea was to avoid the `flatten_indices` call and reorder a new column instead, but in the end I decided to follow `concatenate_datasets` and use `flatten_indices` to avoid padding when `dataset._indices.num_rows != dataset._data.num_rows`.
Fix #3599 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3647/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3647/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2853/comments | https://api.github.com/repos/huggingface/datasets/issues/2853/events | https://github.com/huggingface/datasets/pull/2853 | 983,692,026 | MDExOlB1bGxSZXF1ZXN0NzIzMjI4NDY3 | 2,853 | Add AMI dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/cahya-wirawan",
"id": 7669893,
"login": "cahya-wirawan",
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"type": "User",
"url": "https://api.github.com/users/cahya-wirawan"
} | [] | closed | false | null | [] | null | [
"Hey @cahya-wirawan, \r\n\r\nI played around with the dataset a bit and it looks already very good to me! That's exactly how it should be constructed :-) I can help you a bit with defining the config, etc... on Monday!",
"@lhoestq - I think the dataset is ready to be merged :-) \r\n\r\nAt the moment, I don't real... | 2021-08-31T10:19:01Z | 2021-09-29T09:19:19Z | 2021-09-29T09:19:19Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2853.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2853",
"merged_at": "2021-09-29T09:19:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2853.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2853"
} | This is an initial commit for AMI dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2853/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2853/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/3692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3692/comments | https://api.github.com/repos/huggingface/datasets/issues/3692/events | https://github.com/huggingface/datasets/pull/3692 | 1,128,320,004 | PR_kwDODunzps4yShiu | 3,692 | Update data URL in pubmed dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"- I updated the previous dummy data: I just had to rename the file and its directory\r\n - the dummy data zip contains only a single file: `pubmed22n0001.xml.gz`\r\n\r\nThen I discover it fails: https://app.circleci.com/pipelines/github/huggingface/datasets/9800/workflows/173a4433-8feb-4fc6-ab9e-59762084e3e1/jobs... | 2022-02-09T10:06:21Z | 2022-02-14T14:15:42Z | 2022-02-14T14:15:41Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3692.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3692",
"merged_at": "2022-02-14T14:15:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3692.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3692"
} | Fix #3655. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3692/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3692/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/4366 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4366/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4366/comments | https://api.github.com/repos/huggingface/datasets/issues/4366/events | https://github.com/huggingface/datasets/issues/4366 | 1,239,534,165 | I_kwDODunzps5J4cpV | 4,366 | TypeError: __init__() missing 1 required positional argument: 'scheme' | {
"avatar_url": "https://avatars.githubusercontent.com/u/99231535?v=4",
"events_url": "https://api.github.com/users/jffgitt/events{/privacy}",
"followers_url": "https://api.github.com/users/jffgitt/followers",
"following_url": "https://api.github.com/users/jffgitt/following{/other_user}",
"gists_url": "https://api.github.com/users/jffgitt/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/jffgitt",
"id": 99231535,
"login": "jffgitt",
"node_id": "U_kgDOBeonLw",
"organizations_url": "https://api.github.com/users/jffgitt/orgs",
"received_events_url": "https://api.github.com/users/jffgitt/received_events",
"repos_url": "https://api.github.com/users/jffgitt/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/jffgitt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jffgitt/subscriptions",
"type": "User",
"url": "https://api.github.com/users/jffgitt"
} | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | [] | null | [
"Duplicate of:\r\n- #3956\r\n\r\nI think you should report that issue to `elasticsearch` library: https://github.com/elastic/elasticsearch-py"
] | 2022-05-18T07:17:29Z | 2022-05-18T16:36:22Z | 2022-05-18T16:36:21Z | NONE | null | null | null | "name" : "node-1",
"cluster_name" : "elasticsearch",
"cluster_uuid" : "",
"version" : {
"number" : "7.5.0",
"build_flavor" : "default",
"build_type" : "tar",
"build_hash" : "",
"build_date" : "2019-11-26T01:06:52.518245Z",
"build_snapshot" : false,
"lucene_version" : "8.3.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
when I run the command:
nohup python3 custom_service.pyc > service.log 2>&1&
the log:
nohup: ignoring input
Traceback (most recent call last):
File "/home/xfz/p3_custom_test/custom_service.py", line 55, in <module>
File "/home/xfz/p3_custom_test/custom_service.py", line 48, in doInitialize
File "custom_impl.py", line 286, in custom_setup
File "custom_impl.py", line 127, in create_es_index
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/__init__.py", line 345, in __init__
ssl_show_warn=ssl_show_warn,
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 105, in client_node_configs
node_configs = hosts_to_node_configs(hosts)
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 154, in hosts_to_node_configs
node_configs.append(host_mapping_to_node_config(host))
File "/usr/local/lib/python3.7/site-packages/elasticsearch/_sync/client/utils.py", line 221, in host_mapping_to_node_config
return NodeConfig(**options) # type: ignore
TypeError: __init__() missing 1 required positional argument: 'scheme'
[1]+ Exit 1 nohup python3 custom_service.pyc > service.log 2>&1
custom_service.pyc can't run
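For context, recent `elasticsearch-py` clients require the scheme to be part of each host definition; below is a minimal sketch of a client construction that satisfies that requirement (host and port are placeholders, and this is not necessarily how custom_impl.py builds its client):

```python
from elasticsearch import Elasticsearch

# Either pass a full URL string (scheme included)...
es = Elasticsearch("http://localhost:9200")

# ...or include "scheme" explicitly when using host dicts.
es = Elasticsearch([{"host": "localhost", "port": 9200, "scheme": "http"}])

print(es.info())  # requires a reachable Elasticsearch server
```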
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4366/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4366/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/3506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3506/comments | https://api.github.com/repos/huggingface/datasets/issues/3506/events | https://github.com/huggingface/datasets/pull/3506 | 1,091,166,595 | PR_kwDODunzps4wZpot | 3,506 | Allows DatasetDict.filter to have batching option | {
"avatar_url": "https://avatars.githubusercontent.com/u/24695242?v=4",
"events_url": "https://api.github.com/users/thomasw21/events{/privacy}",
"followers_url": "https://api.github.com/users/thomasw21/followers",
"following_url": "https://api.github.com/users/thomasw21/following{/other_user}",
"gists_url": "https://api.github.com/users/thomasw21/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/thomasw21",
"id": 24695242,
"login": "thomasw21",
"node_id": "MDQ6VXNlcjI0Njk1MjQy",
"organizations_url": "https://api.github.com/users/thomasw21/orgs",
"received_events_url": "https://api.github.com/users/thomasw21/received_events",
"repos_url": "https://api.github.com/users/thomasw21/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/thomasw21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/thomasw21/subscriptions",
"type": "User",
"url": "https://api.github.com/users/thomasw21"
} | [] | closed | false | null | [] | null | [] | 2021-12-30T15:22:22Z | 2022-01-04T10:24:28Z | 2022-01-04T10:24:27Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3506.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3506",
"merged_at": "2022-01-04T10:24:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3506.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3506"
} | - Related to: #3244
- Fixes: #3503
We extend `.filter(..., batched: bool)` support to DatasetDict. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3506/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3506/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5844/comments | https://api.github.com/repos/huggingface/datasets/issues/5844/events | https://github.com/huggingface/datasets/issues/5844 | 1,705,907,812 | I_kwDODunzps5lrhZk | 5,844 | TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/54010030?v=4",
"events_url": "https://api.github.com/users/chen-coding/events{/privacy}",
"followers_url": "https://api.github.com/users/chen-coding/followers",
"following_url": "https://api.github.com/users/chen-coding/following{/other_user}",
"gists_url": "https://api.github.com/users/chen-coding/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/chen-coding",
"id": 54010030,
"login": "chen-coding",
"node_id": "MDQ6VXNlcjU0MDEwMDMw",
"organizations_url": "https://api.github.com/users/chen-coding/orgs",
"received_events_url": "https://api.github.com/users/chen-coding/received_events",
"repos_url": "https://api.github.com/users/chen-coding/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/chen-coding/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/chen-coding/subscriptions",
"type": "User",
"url": "https://api.github.com/users/chen-coding"
} | [] | open | false | null | [] | null | [] | 2023-05-11T14:15:01Z | 2023-05-11T14:15:01Z | null | NONE | null | null | null | ### Describe the bug
TypeError: Couldn't cast array of type struct<answer: struct<unanswerable: bool, answerType: string, free_form_answer: string, evidence: list<item: string>, evidenceAnnotate: list<item: string>, highlighted_evidence: list<item: string>>> to {'answer': {'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}, 'unanswerable': Value(dtype='bool', id=None), 'answerType': Value(dtype='string', id=None), 'free_form_answer': Value(dtype='string', id=None), 'evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'evidenceAnnotate': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None), 'highlighted_evidence': Sequence(feature=Value(dtype='string', id=None), length=-1, id=None)}
When I use _load_dataset()_ I get the error
`from datasets import load_dataset
datafiles = {'train': './data/train.json', 'validation': './data/validation.json', 'test': './data/test.json'}
raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")
`
Detailed error information is as follows:
Traceback (most recent call last):
File "C:/Users/CHENJIALEI/Desktop/NLPCC2023/NLPCC23_SciMRC-main/test2.py", line 9, in <module>
raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\load.py", line 1747, in load_dataset
builder_instance.download_and_prepare(
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 814, in download_and_prepare
self._download_and_prepare(
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 905, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\builder.py", line 1521, in _prepare_split
writer.write_table(table)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\arrow_writer.py", line 540, in write_table
pa_table = table_cast(pa_table, self._schema)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2069, in table_cast
return cast_table_to_schema(table, schema)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2031, in cast_table_to_schema
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 2031, in <listcomp>
arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1740, in wrapper
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1740, in <listcomp>
return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1867, in cast_array_to_feature
casted_values = _c(array.values, feature[0])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1862, in cast_array_to_feature
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1862, in <listcomp>
arrays = [_c(array.field(name), subfeature) for name, subfeature in feature.items()]
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1867, in cast_array_to_feature
casted_values = _c(array.values, feature[0])
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1742, in wrapper
return func(array, *args, **kwargs)
File "D:\Environment\anaconda3\envs\test\lib\site-packages\datasets\table.py", line 1913, in cast_array_to_feature
raise TypeError(f"Couldn't cast array of type\n{array.type}\nto\n{feature}")
It is successful when I load the data separately
`raw_data = load_dataset("json", data_files="./data/train.json", cache_dir="./cache")`
### Steps to reproduce the bug
1. from datasets import load_dataset
2. datafiles = {'train': './data/train.json', 'validation': './data/validation.json', 'test': './data/test.json'}
3. raw_data = load_dataset("json", data_files=datafiles, cache_dir="./cache")
### Expected behavior
Successfully load dataset
### Environment info
datasets == 2.6.1
pyarrow == 8.0.0
python == 3.8
platform: Windows 11 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5844/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5844/timeline | null | null | false |
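A cast error like the one in the record above usually means the three JSON files do not share exactly the same nested schema. A minimal diagnostic sketch — not part of the original report, and assuming the same local file layout — is to load each file on its own (which the author reports works) and compare the inferred features:

```python
from datasets import load_dataset

# Load each file separately and print the inferred schema; the split whose
# features differ from the others (e.g. a struct where the rest have a plain
# field) is the one that triggers the cast failure when all three are combined.
splits = {
    "train": "./data/train.json",
    "validation": "./data/validation.json",
    "test": "./data/test.json",
}
for name, path in splits.items():
    ds = load_dataset("json", data_files=path, cache_dir="./cache")["train"]
    print(name, ds.features)
```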
https://api.github.com/repos/huggingface/datasets/issues/2836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2836/comments | https://api.github.com/repos/huggingface/datasets/issues/2836/events | https://github.com/huggingface/datasets/pull/2836 | 979,230,142 | MDExOlB1bGxSZXF1ZXN0NzE5NjY5MDUy | 2,836 | Optimize Dataset.filter to only compute the indices to keep | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [] | closed | false | null | [] | null | [
"Maybe worth updating the docs here as well?",
"Yup, will do !"
] | 2021-08-25T14:41:22Z | 2021-09-14T14:51:53Z | 2021-09-13T15:50:21Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2836.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2836",
"merged_at": "2021-09-13T15:50:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2836.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2836"
} | Optimize `Dataset.filter` to only compute the indices of the rows to keep, instead of creating a new Arrow table with the rows to keep. Creating a new table was an issue because it could take a lot of disk space.
This will be useful for processing audio datasets, for example. cc @patrickvonplaten | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2836/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2836/timeline | null | null | true |
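As a rough illustration of the behaviour described in the PR above — a sketch of the user-visible effect, not the actual implementation — `filter` keeps the rows matching a predicate, which is conceptually the same as computing the indices to keep and calling `select`:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "bb", "ccc", "dddd"]})

# filter() keeps the rows matching the predicate; after this optimization it only
# records which row indices to keep instead of writing a new Arrow table to disk.
filtered = ds.filter(lambda ex: len(ex["text"]) > 2)

# Conceptually equivalent: compute the indices, then select them.
indices = [i for i, t in enumerate(ds["text"]) if len(t) > 2]
selected = ds.select(indices)

assert filtered["text"] == selected["text"]  # ["ccc", "dddd"]
```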
https://api.github.com/repos/huggingface/datasets/issues/3537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3537/comments | https://api.github.com/repos/huggingface/datasets/issues/3537/events | https://github.com/huggingface/datasets/pull/3537 | 1,094,738,734 | PR_kwDODunzps4wlH1d | 3,537 | added PII statements and license links to data cards | {
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"events_url": "https://api.github.com/users/mcmillanmajora/events{/privacy}",
"followers_url": "https://api.github.com/users/mcmillanmajora/followers",
"following_url": "https://api.github.com/users/mcmillanmajora/following{/other_user}",
"gists_url": "https://api.github.com/users/mcmillanmajora/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mcmillanmajora",
"id": 26722925,
"login": "mcmillanmajora",
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"organizations_url": "https://api.github.com/users/mcmillanmajora/orgs",
"received_events_url": "https://api.github.com/users/mcmillanmajora/received_events",
"repos_url": "https://api.github.com/users/mcmillanmajora/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mcmillanmajora/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mcmillanmajora/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mcmillanmajora"
} | [] | closed | false | null | [] | null | [] | 2022-01-05T20:59:21Z | 2022-01-05T22:02:37Z | 2022-01-05T22:02:37Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/3537.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3537",
"merged_at": "2022-01-05T22:02:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3537.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3537"
} | Updates for the following datacards:
multilingual_librispeech
openslr
speech commands
superb
timit_asr
vctk | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3537/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3537/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/6119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6119/comments | https://api.github.com/repos/huggingface/datasets/issues/6119/events | https://github.com/huggingface/datasets/pull/6119 | 1,835,996,350 | PR_kwDODunzps5XKI19 | 6,119 | [Docs] Add description of `select_columns` to guide | {
"avatar_url": "https://avatars.githubusercontent.com/u/18213435?v=4",
"events_url": "https://api.github.com/users/unifyh/events{/privacy}",
"followers_url": "https://api.github.com/users/unifyh/followers",
"following_url": "https://api.github.com/users/unifyh/following{/other_user}",
"gists_url": "https://api.github.com/users/unifyh/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/unifyh",
"id": 18213435,
"login": "unifyh",
"node_id": "MDQ6VXNlcjE4MjEzNDM1",
"organizations_url": "https://api.github.com/users/unifyh/orgs",
"received_events_url": "https://api.github.com/users/unifyh/received_events",
"repos_url": "https://api.github.com/users/unifyh/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/unifyh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/unifyh/subscriptions",
"type": "User",
"url": "https://api.github.com/users/unifyh"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-08-04T03:13:30Z | 2023-08-16T10:13:02Z | 2023-08-16T10:02:52Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/6119.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6119",
"merged_at": "2023-08-16T10:02:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6119.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6119"
} | Closes #6116 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6119/timeline | null | null | true |
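For context on the guide addition above, a minimal usage sketch of `select_columns` (illustrative only; the column names here are made up):

```python
from datasets import Dataset

ds = Dataset.from_dict({"id": [0, 1], "text": ["a", "b"], "label": [1, 0]})

# select_columns returns a new dataset that keeps only the named columns,
# leaving the original dataset untouched.
subset = ds.select_columns(["text", "label"])
print(subset.column_names)  # only 'text' and 'label' remain
```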
https://api.github.com/repos/huggingface/datasets/issues/2519 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2519/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2519/comments | https://api.github.com/repos/huggingface/datasets/issues/2519/events | https://github.com/huggingface/datasets/pull/2519 | 924,903,240 | MDExOlB1bGxSZXF1ZXN0NjczNDcyMzYy | 2,519 | Improve performance of pandas arrow extractor | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"Looks like this change\r\n```\r\npa_table[pa_table.column_names[0]].to_pandas(types_mapper=pandas_types_mapper)\r\n```\r\ndoesn't return a Series with the correct type.\r\nThis is related to https://issues.apache.org/jira/browse/ARROW-9664\r\n\r\nSince the types_mapper isn't taken into account, the ArrayXD types a... | 2021-06-18T13:24:41Z | 2021-06-21T09:06:06Z | 2021-06-21T09:06:06Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2519.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2519",
"merged_at": "2021-06-21T09:06:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2519.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2519"
} | While reviewing PR #2505, I noticed that pandas arrow extractor could be refactored to be faster. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2519/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2519/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1431 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1431/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1431/comments | https://api.github.com/repos/huggingface/datasets/issues/1431/events | https://github.com/huggingface/datasets/pull/1431 | 760,791,019 | MDExOlB1bGxSZXF1ZXN0NTM1NTYzOTk1 | 1,431 | Ar cov19 | {
"avatar_url": "https://avatars.githubusercontent.com/u/71061623?v=4",
"events_url": "https://api.github.com/users/Fatima-Haouari/events{/privacy}",
"followers_url": "https://api.github.com/users/Fatima-Haouari/followers",
"following_url": "https://api.github.com/users/Fatima-Haouari/following{/other_user}",
"gists_url": "https://api.github.com/users/Fatima-Haouari/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/Fatima-Haouari",
"id": 71061623,
"login": "Fatima-Haouari",
"node_id": "MDQ6VXNlcjcxMDYxNjIz",
"organizations_url": "https://api.github.com/users/Fatima-Haouari/orgs",
"received_events_url": "https://api.github.com/users/Fatima-Haouari/received_events",
"repos_url": "https://api.github.com/users/Fatima-Haouari/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/Fatima-Haouari/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Fatima-Haouari/subscriptions",
"type": "User",
"url": "https://api.github.com/users/Fatima-Haouari"
} | [] | closed | false | null | [] | null | [
"merging since the CI is fixed on master"
] | 2020-12-10T00:59:34Z | 2020-12-11T15:01:23Z | 2020-12-11T15:01:23Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1431.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1431",
"merged_at": "2020-12-11T15:01:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1431.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1431"
} | Adding ArCOV-19 dataset. ArCOV-19 is an Arabic COVID-19 Twitter dataset that covers the period from 27th of January till 30th of April 2020. ArCOV-19 is the first publicly-available Arabic Twitter dataset covering COVID-19 pandemic that includes over 1M tweets alongside the propagation networks of the most-popular subset of them (i.e., most-retweeted and-liked). The propagation networks include both retweets and conversational threads (i.e., threads of replies). ArCOV-19 is designed to enable research under several domains including natural language processing, information retrieval, and social computing, among others. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1431/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1431/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1283 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1283/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1283/comments | https://api.github.com/repos/huggingface/datasets/issues/1283/events | https://github.com/huggingface/datasets/pull/1283 | 759,251,457 | MDExOlB1bGxSZXF1ZXN0NTM0Mjg4MDg2 | 1,283 | Add dutch book review dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/8875786?v=4",
"events_url": "https://api.github.com/users/benjaminvdb/events{/privacy}",
"followers_url": "https://api.github.com/users/benjaminvdb/followers",
"following_url": "https://api.github.com/users/benjaminvdb/following{/other_user}",
"gists_url": "https://api.github.com/users/benjaminvdb/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/benjaminvdb",
"id": 8875786,
"login": "benjaminvdb",
"node_id": "MDQ6VXNlcjg4NzU3ODY=",
"organizations_url": "https://api.github.com/users/benjaminvdb/orgs",
"received_events_url": "https://api.github.com/users/benjaminvdb/received_events",
"repos_url": "https://api.github.com/users/benjaminvdb/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/benjaminvdb/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/benjaminvdb/subscriptions",
"type": "User",
"url": "https://api.github.com/users/benjaminvdb"
} | [] | closed | false | null | [] | null | [
"> Really cool thanks !\r\n> \r\n> I left some (minor) comments\r\n\r\nThank you for your comments! 👍 I went ahead and improved the dataset card using your suggestions and some tweaks of my own. I hope you like it! 😄"
] | 2020-12-08T08:50:48Z | 2020-12-09T20:21:58Z | 2020-12-09T17:25:25Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1283.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1283",
"merged_at": "2020-12-09T17:25:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1283.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1283"
} | - Name: Dutch Book Review Dataset (DBRD)
- Description: The DBRD (pronounced dee-bird) dataset contains over 110k book reviews along with associated binary sentiment polarity labels and is intended as a benchmark for sentiment classification in Dutch.
- Paper: https://arxiv.org/abs/1910.00896
- Data: https://github.com/benjaminvdb/DBRD
- Motivation: A large (real-life) dataset of Dutch book reviews and sentiment polarity (positive/negative), based on the associated rating.
Checks
- [x] Create the dataset script /datasets/dbrd/dbrd.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _info(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template : fill the tags and the various paragraphs
- [x] Both tests for the real data and the dummy data pass. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1283/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1283/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1299/comments | https://api.github.com/repos/huggingface/datasets/issues/1299/events | https://github.com/huggingface/datasets/issues/1299 | 759,414,566 | MDU6SXNzdWU3NTk0MTQ1NjY= | 1,299 | can't load "german_legal_entity_recognition" dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/59837137?v=4",
"events_url": "https://api.github.com/users/nataly-obr/events{/privacy}",
"followers_url": "https://api.github.com/users/nataly-obr/followers",
"following_url": "https://api.github.com/users/nataly-obr/following{/other_user}",
"gists_url": "https://api.github.com/users/nataly-obr/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/nataly-obr",
"id": 59837137,
"login": "nataly-obr",
"node_id": "MDQ6VXNlcjU5ODM3MTM3",
"organizations_url": "https://api.github.com/users/nataly-obr/orgs",
"received_events_url": "https://api.github.com/users/nataly-obr/received_events",
"repos_url": "https://api.github.com/users/nataly-obr/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/nataly-obr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/nataly-obr/subscriptions",
"type": "User",
"url": "https://api.github.com/users/nataly-obr"
} | [] | closed | false | null | [] | null | [
"Please if you could tell me more about the error? \r\n\r\n1. Please check the directory you've been working on\r\n2. Check for any typos",
"> Please if you could tell me more about the error?\r\n> \r\n> 1. Please check the directory you've been working on\r\n> 2. Check for any typos\r\n\r\nError happens during t... | 2020-12-08T12:42:01Z | 2020-12-16T16:03:13Z | 2020-12-16T16:03:13Z | NONE | null | null | null | FileNotFoundError: Couldn't find file locally at german_legal_entity_recognition/german_legal_entity_recognition.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/german_legal_entity_recognition/german_legal_entity_recognition.py
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1299/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1299/timeline | null | completed | false |
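The traceback in the issue above points at the loader-script URL for `datasets` 1.1.3, which predates this dataset, so upgrading the library is the first thing to try. A minimal check, assuming the dataset name is spelled exactly as on the Hub and that `"all"` is one of its configurations (both are assumptions here):

```python
# pip install -U datasets
import datasets
print(datasets.__version__)  # the error above shows scripts being resolved for 1.1.3

from datasets import load_dataset

# "all" is an assumed config name; pick one of the configurations listed on the Hub page.
ds = load_dataset("german_legal_entity_recognition", "all")
print(ds)
```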
https://api.github.com/repos/huggingface/datasets/issues/4818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4818/comments | https://api.github.com/repos/huggingface/datasets/issues/4818/events | https://github.com/huggingface/datasets/pull/4818 | 1,334,941,810 | PR_kwDODunzps48-W7a | 4,818 | Add add cc-by-sa-2.5 license tag | {
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/polinaeterna",
"id": 16348744,
"login": "polinaeterna",
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"type": "User",
"url": "https://api.github.com/users/polinaeterna"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4818). All of your documentation changes will be reflected on that endpoint.",
"I think we can close this PR because the `standard_licenses.tsv` file was removed from this repo and we no longer perform any dataset card validati... | 2022-08-10T17:18:39Z | 2022-10-04T13:47:24Z | 2022-10-04T13:47:24Z | CONTRIBUTOR | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/4818.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4818",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4818.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4818"
} | - [ ] add it to moon-landing
- [ ] add it to hub-docs | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4818/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4818/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5552/comments | https://api.github.com/repos/huggingface/datasets/issues/5552/events | https://github.com/huggingface/datasets/pull/5552 | 1,592,186,703 | PR_kwDODunzps5KXMjA | 5,552 | Make tiktoken tokenizers hashable | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | 2023-02-20T16:50:09Z | 2023-02-21T13:20:42Z | 2023-02-21T13:13:05Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5552.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5552",
"merged_at": "2023-02-21T13:13:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5552.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5552"
} | Fix for https://discord.com/channels/879548962464493619/1075729627546406912/1075729627546406912
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5552/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5552/timeline | null | null | true |
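To make the motivation above concrete, a small sketch (assuming `tiktoken` is installed and its "gpt2" encoding is available) of the pattern that needs the encoder to be hashable: `map` fingerprints the mapped function, including the captured encoder object, to decide whether the cache can be reused.

```python
import tiktoken
from datasets import Dataset

enc = tiktoken.get_encoding("gpt2")
ds = Dataset.from_dict({"text": ["hello world", "tiktoken inside map"]})

# The lambda closes over `enc`; datasets hashes it to build the cache fingerprint,
# which is what this PR makes possible for tiktoken encoders.
ds = ds.map(lambda ex: {"input_ids": enc.encode(ex["text"])})
print(ds[0]["input_ids"])
```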
https://api.github.com/repos/huggingface/datasets/issues/5299 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5299/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5299/comments | https://api.github.com/repos/huggingface/datasets/issues/5299/events | https://github.com/huggingface/datasets/pull/5299 | 1,464,695,091 | PR_kwDODunzps5Dt3Sk | 5,299 | Fix xopen for Windows pathnames | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-11-25T15:35:28Z | 2022-11-29T08:23:58Z | 2022-11-29T08:21:24Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5299.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5299",
"merged_at": "2022-11-29T08:21:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5299.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5299"
} | This PR fixes a bug in the `xopen` function for Windows pathnames.
Fix #5298. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5299/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5299/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5247/comments | https://api.github.com/repos/huggingface/datasets/issues/5247/events | https://github.com/huggingface/datasets/pull/5247 | 1,451,297,749 | PR_kwDODunzps5DAhto | 5,247 | Set dev version | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5247). All of your documentation changes will be reflected on that endpoint."
] | 2022-11-16T10:17:31Z | 2022-11-16T10:22:20Z | 2022-11-16T10:17:50Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5247.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5247",
"merged_at": "2022-11-16T10:17:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5247.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5247"
} | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5247/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5247/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/5077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5077/comments | https://api.github.com/repos/huggingface/datasets/issues/5077/events | https://github.com/huggingface/datasets/pull/5077 | 1,398,080,859 | PR_kwDODunzps5AOs9L | 5,077 | Fix passed download_config in HubDatasetModuleFactoryWithoutScript | {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
} | [] | closed | false | null | [] | null | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | 2022-10-05T16:42:36Z | 2022-10-06T05:31:22Z | 2022-10-06T05:29:06Z | MEMBER | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5077.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5077",
"merged_at": "2022-10-06T05:29:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5077.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5077"
} | Fix passed `download_config` in `HubDatasetModuleFactoryWithoutScript`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5077/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5077/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1584/comments | https://api.github.com/repos/huggingface/datasets/issues/1584/events | https://github.com/huggingface/datasets/pull/1584 | 768,820,406 | MDExOlB1bGxSZXF1ZXN0NTQxMTM2OTQ5 | 1,584 | Load hind encorp | {
"avatar_url": "https://avatars.githubusercontent.com/u/56379013?v=4",
"events_url": "https://api.github.com/users/rahul-art/events{/privacy}",
"followers_url": "https://api.github.com/users/rahul-art/followers",
"following_url": "https://api.github.com/users/rahul-art/following{/other_user}",
"gists_url": "https://api.github.com/users/rahul-art/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/rahul-art",
"id": 56379013,
"login": "rahul-art",
"node_id": "MDQ6VXNlcjU2Mzc5MDEz",
"organizations_url": "https://api.github.com/users/rahul-art/orgs",
"received_events_url": "https://api.github.com/users/rahul-art/received_events",
"repos_url": "https://api.github.com/users/rahul-art/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/rahul-art/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rahul-art/subscriptions",
"type": "User",
"url": "https://api.github.com/users/rahul-art"
} | [] | closed | false | null | [] | null | [] | 2020-12-16T12:38:38Z | 2020-12-18T02:27:24Z | 2020-12-18T02:27:24Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/1584.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1584",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1584.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1584"
} | Reformatted, well documented, YAML tags added, code | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1584/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1584/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/2770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2770/comments | https://api.github.com/repos/huggingface/datasets/issues/2770/events | https://github.com/huggingface/datasets/pull/2770 | 963,246,512 | MDExOlB1bGxSZXF1ZXN0NzA1OTAzMzIy | 2,770 | Add support for fast tokenizer in BertScore | {
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/mariosasko",
"id": 47462742,
"login": "mariosasko",
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"type": "User",
"url": "https://api.github.com/users/mariosasko"
} | [] | closed | false | null | [] | null | [] | 2021-08-07T15:00:03Z | 2021-08-09T12:34:43Z | 2021-08-09T11:16:25Z | CONTRIBUTOR | null | 0 | {
"diff_url": "https://github.com/huggingface/datasets/pull/2770.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2770",
"merged_at": "2021-08-09T11:16:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2770.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2770"
} | This PR adds support for a fast tokenizer in BertScore, a feature recently added to the `bert_score` library.
Fixes #2765 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2770/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2770/timeline | null | null | true |
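A hedged usage sketch of the feature this PR describes — it assumes the new option is exposed as a `use_fast_tokenizer` flag on the metric's `compute` call and that the installed `bert_score` package is recent enough to accept it:

```python
from datasets import load_metric

bertscore = load_metric("bertscore")
predictions = ["the cat sat on the mat"]
references = ["a cat was sitting on the mat"]

# use_fast_tokenizer is the flag assumed to be added by this PR; it is forwarded
# to the underlying bert_score package.
results = bertscore.compute(
    predictions=predictions,
    references=references,
    lang="en",
    use_fast_tokenizer=True,
)
print(results["f1"])
```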
https://api.github.com/repos/huggingface/datasets/issues/5018 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5018/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5018/comments | https://api.github.com/repos/huggingface/datasets/issues/5018/events | https://github.com/huggingface/datasets/pull/5018 | 1,384,146,585 | PR_kwDODunzps4_hA0V | 5,018 | Create all YAML dataset_info | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/lhoestq",
"id": 42851186,
"login": "lhoestq",
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"type": "User",
"url": "https://api.github.com/users/lhoestq"
} | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | [] | null | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5018). All of your documentation changes will be reflected on that endpoint.",
"Closing since https://github.com/huggingface/datasets/pull/4974 removed all the datasets scripts.\r\n\r\nIndividual PRs must be opened on the Huggi... | 2022-09-23T18:08:15Z | 2023-09-24T09:33:21Z | 2022-10-03T17:08:05Z | MEMBER | null | 1 | {
"diff_url": "https://github.com/huggingface/datasets/pull/5018.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5018",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5018.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5018"
} | Following https://github.com/huggingface/datasets/pull/4926
Creates all the `dataset_info` YAML fields in the dataset cards
The JSON files are also updated using the simplified backward-compatible format added in https://github.com/huggingface/datasets/pull/4926
Needs https://github.com/huggingface/datasets/pull/4926 to be merged first | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5018/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5018/timeline | null | null | true |