| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | labels (list) | state (string) | locked (bool) | milestone (dict) | comments (int64) | created_at (string) | updated_at (string) | closed_at (string) | active_lock_reason (null) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (bool) | pull_request (dict) | is_pull_request (bool) | comments_text (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1640 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1640/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1640/comments | https://api.github.com/repos/huggingface/datasets/issues/1640/events | https://github.com/huggingface/datasets/pull/1640 | 774,921,836 | MDExOlB1bGxSZXF1ZXN0NTQ1NzI2NzY4 | 1,640 | Fix "'BertTokenizerFast' object has no attribute 'max_len'" | [] | closed | false | null | 0 | 2020-12-26T19:25:41Z | 2020-12-28T17:26:35Z | 2020-12-28T17:26:35Z | null | Tensorflow 2.3.0 gives:
FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead.
Tensorflow 2.4.0 gives:
AttributeError 'BertTokenizerFast' object has no attribute 'max_len' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1640/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1640/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1640.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1640",
"merged_at": "2020-12-28T17:26:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1640.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1640"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1328 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1328/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1328/comments | https://api.github.com/repos/huggingface/datasets/issues/1328/events | https://github.com/huggingface/datasets/pull/1328 | 759,634,907 | MDExOlB1bGxSZXF1ZXN0NTM0NjA2MDM1 | 1,328 | Added the NewsPH Raw dataset and corresponding dataset card | [] | closed | false | null | 0 | 2020-12-08T17:25:45Z | 2020-12-10T11:04:34Z | 2020-12-10T11:04:34Z | null | This PR adds the original NewsPH dataset which is used to autogenerate the NewsPH-NLI dataset. Reopened a new PR as the previous one had problems.
Paper: https://arxiv.org/abs/2010.11574
Repo: https://github.com/jcblaisecruz02/Filipino-Text-Benchmarks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1328/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1328/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1328.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1328",
"merged_at": "2020-12-10T11:04:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1328.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1328"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4938/comments | https://api.github.com/repos/huggingface/datasets/issues/4938/events | https://github.com/huggingface/datasets/pull/4938 | 1,363,429,228 | PR_kwDODunzps4-coaB | 4,938 | Remove main branch rename notice | [] | closed | false | null | 1 | 2022-09-06T15:03:05Z | 2022-09-06T16:46:11Z | 2022-09-06T16:43:53Z | null | We added a notice in README.md to show that we renamed the master branch to main, but we can remove it now (it's been 2 months)
I also unpinned the GitHub issue about the branch renaming | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4938/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4938/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4938.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4938",
"merged_at": "2022-09-06T16:43:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4938.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4938"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4253 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4253/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4253/comments | https://api.github.com/repos/huggingface/datasets/issues/4253/events | https://github.com/huggingface/datasets/pull/4253 | 1,219,286,408 | PR_kwDODunzps42-c8Q | 4,253 | Create metric cards for mean IOU | [] | closed | false | null | 1 | 2022-04-28T20:58:27Z | 2022-04-29T17:44:47Z | 2022-04-29T17:38:06Z | null | Proposing a metric card for mIoU :rocket:
sorry for spamming you with review requests, @albertvillanova ! :hugs: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4253/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4253/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4253.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4253",
"merged_at": "2022-04-29T17:38:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4253.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4253"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2658/comments | https://api.github.com/repos/huggingface/datasets/issues/2658/events | https://github.com/huggingface/datasets/issues/2658 | 946,139,532 | MDU6SXNzdWU5NDYxMzk1MzI= | 2,658 | Can't pass `sep=None` to load_dataset("csv", ...) to infer the separator via pandas.read_csv | [] | closed | false | null | 0 | 2021-07-16T10:05:44Z | 2021-07-16T12:46:06Z | 2021-07-16T12:46:06Z | null | When doing `load_dataset("csv", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=","` instead, which makes it impossible to make the csv loader infer the separator.
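For reference, `pandas.read_csv` itself supports this inference: with `sep=None` and the Python engine, it detects the delimiter via `csv.Sniffer`. A minimal sketch of the behavior the csv loader should forward:

```python
import io

import pandas as pd

# With sep=None and the python engine, pandas sniffs the delimiter itself.
data = io.StringIO("a;b;c\n1;2;3\n4;5;6\n")
df = pd.read_csv(data, sep=None, engine="python")
```

Here the `;` separator is inferred without being passed explicitly, which is the behavior one would expect `load_dataset("csv", sep=None)` to expose.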
Related to https://github.com/huggingface/datasets/pull/2656
cc @SBrandeis | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2658/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2658/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2331/comments | https://api.github.com/repos/huggingface/datasets/issues/2331/events | https://github.com/huggingface/datasets/issues/2331 | 879,031,427 | MDU6SXNzdWU4NzkwMzE0Mjc= | 2,331 | Add Topical-Chat | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 0 | 2021-05-07T13:43:59Z | 2021-05-07T13:43:59Z | null | null | ## Adding a Dataset
- **Name:** Topical-Chat
- **Description:** a knowledge-grounded human-human conversation dataset where the underlying knowledge spans 8 broad topics and conversation partners don’t have explicitly defined roles
- **Paper:** https://www.isca-speech.org/archive/Interspeech_2019/pdfs/3079.pdf
- **Data:** https://github.com/alexa/Topical-Chat
- **Motivation:** Good quality, knowledge-grounded dataset that spans a broad range of topics
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2331/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/252/comments | https://api.github.com/repos/huggingface/datasets/issues/252/events | https://github.com/huggingface/datasets/issues/252 | 634,563,239 | MDU6SXNzdWU2MzQ1NjMyMzk= | 252 | NonMatchingSplitsSizesError error when reading the IMDB dataset | [] | closed | false | null | 4 | 2020-06-08T12:26:24Z | 2021-08-27T15:20:58Z | 2020-06-08T14:01:26Z | null | Hi!
I am trying to load the `imdb` dataset with this line:
`dataset = nlp.load_dataset('imdb', data_dir='/A/PATH', cache_dir='/A/PATH')`
but I am getting the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/load.py", line 517, in load_dataset
save_infos=save_infos,
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 363, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/builder.py", line 421, in _download_and_prepare
verify_splits(self.info.splits, split_dict)
File "/mounts/Users/cisintern/antmarakis/anaconda3/lib/python3.7/site-packages/nlp/utils/info_utils.py", line 70, in verify_splits
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=33442202, num_examples=25000, dataset_name='imdb'), 'recorded': SplitInfo(name='train', num_bytes=5929447, num_examples=4537, dataset_name='imdb')}, {'expected': SplitInfo(name='unsupervised', num_bytes=67125548, num_examples=50000, dataset_name='imdb'), 'recorded': SplitInfo(name='unsupervised', num_bytes=0, num_examples=0, dataset_name='imdb')}]
```
Am I overlooking something? Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/252/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/252/timeline | null | completed | null | null | false | [
"I just tried on my side and I didn't encounter your problem.\r\nApparently the script doesn't generate all the examples on your side.\r\n\r\nCan you provide the version of `nlp` you're using ?\r\nCan you try to clear your cache and re-run the code ?",
"I updated it, that was it, thanks!",
"Hello, I am facing t... |
https://api.github.com/repos/huggingface/datasets/issues/2125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2125/comments | https://api.github.com/repos/huggingface/datasets/issues/2125/events | https://github.com/huggingface/datasets/issues/2125 | 842,690,570 | MDU6SXNzdWU4NDI2OTA1NzA= | 2,125 | Is dataset timit_asr broken? | [] | closed | false | null | 2 | 2021-03-28T08:30:18Z | 2021-03-28T12:29:25Z | 2021-03-28T12:29:25Z | null | Using `timit_asr` dataset, I saw all records are the same.
``` python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_examples=10):
assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
picks = []
for _ in range(num_examples):
pick = random.randint(0, len(dataset)-1)
while pick in picks:
pick = random.randint(0, len(dataset)-1)
picks.append(pick)
df = pd.DataFrame(dataset[picks])
display(HTML(df.to_html()))
show_random_elements(timit['train'].remove_columns(["file", "phonetic_detail", "word_detail", "dialect_region", "id",
"sentence_type", "speaker_id"]), num_examples=20)
```
`output`
<img width="312" alt="Screen Shot 2021-03-28 at 17 29 04" src="https://user-images.githubusercontent.com/42398050/112746646-21acee80-8feb-11eb-84f3-dbb5d4269724.png">
I double-checked it [here](https://huggingface.co/datasets/viewer/), and met the same problem.
<img width="1374" alt="Screen Shot 2021-03-28 at 17 32 07" src="https://user-images.githubusercontent.com/42398050/112746698-9bdd7300-8feb-11eb-97ed-5babead385f4.png">
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2125/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2125/timeline | null | completed | null | null | false | [
"Hi,\r\n\r\nthanks for the report, but this is a duplicate of #2052. ",
"@mariosasko \r\nThank you for your quick response! Following #2052, I've fixed the problem."
] |
https://api.github.com/repos/huggingface/datasets/issues/5402 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5402/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5402/comments | https://api.github.com/repos/huggingface/datasets/issues/5402/events | https://github.com/huggingface/datasets/issues/5402 | 1,517,409,429 | I_kwDODunzps5acdSV | 5,402 | Missing state.json when creating a cloud dataset using a dataset_builder | [] | open | false | null | 3 | 2023-01-03T13:39:59Z | 2023-01-04T17:23:57Z | null | null | ### Describe the bug
Using `load_dataset_builder` to create a builder, run `download_and_prepare` to upload it to S3. However, when trying to load it, there are missing `state.json` files. Complete example:
```python
from aiobotocore.session import AioSession as Session
from datasets import load_from_disk, load_dataset, load_dataset_builder
import s3fs
storage_options = {"session": Session()}
fs = s3fs.S3FileSystem(**storage_options)
output_dir = "s3://bucket/imdb"
builder = load_dataset_builder("imdb")
builder.download_and_prepare(output_dir, storage_options=storage_options)
load_from_disk(output_dir, fs=fs) # ERROR
# [Errno 2] No such file or directory: '/tmp/tmpy22yys8o/bucket/imdb/state.json'
```
As a comparison, if you use the non-lazy `load_dataset`, it works and the S3 folder has a different structure + `state.json` files. Example:
```python
from aiobotocore.session import AioSession as Session
from datasets import load_from_disk, load_dataset, load_dataset_builder
import s3fs
storage_options = {"session": Session()}
fs = s3fs.S3FileSystem(**storage_options)
output_dir = "s3://bucket/imdb"
dataset = load_dataset("imdb",)
dataset.save_to_disk(output_dir, fs=fs)
load_from_disk(output_dir, fs=fs) # WORKS
```
You still want the 1st option for the laziness and the parquet conversion. Thanks!
### Steps to reproduce the bug
```python
from aiobotocore.session import AioSession as Session
from datasets import load_from_disk, load_dataset, load_dataset_builder
import s3fs
storage_options = {"session": Session()}
fs = s3fs.S3FileSystem(**storage_options)
output_dir = "s3://bucket/imdb"
builder = load_dataset_builder("imdb")
builder.download_and_prepare(output_dir, storage_options=storage_options)
load_from_disk(output_dir, fs=fs) # ERROR
# [Errno 2] No such file or directory: '/tmp/tmpy22yys8o/bucket/imdb/state.json'
```
BTW, you need the AioSession as s3fs is now based on aiobotocore, see https://github.com/fsspec/s3fs/issues/385.
### Expected behavior
Expected to be able to load the dataset from S3.
### Environment info
```
s3fs 2022.11.0
s3transfer 0.6.0
datasets 2.8.0
aiobotocore 2.4.2
boto3 1.24.59
botocore 1.27.59
```
python 3.7.15. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5402/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5402/timeline | null | null | null | null | false | [
"`load_from_disk` must be used on datasets saved using `save_to_disk`: they correspond to fully serialized datasets including their state.\r\n\r\nOn the other hand, `download_and_prepare` just downloads the raw data and convert them to arrow (or parquet if you want). We are working on allowing you to reload a datas... |
https://api.github.com/repos/huggingface/datasets/issues/3041 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3041/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3041/comments | https://api.github.com/repos/huggingface/datasets/issues/3041/events | https://github.com/huggingface/datasets/pull/3041 | 1,018,911,385 | PR_kwDODunzps4s1ZAc | 3,041 | Load private data files + use glob on ZIP archives for json/csv/etc. module inference | [] | closed | false | null | 4 | 2021-10-06T18:16:36Z | 2021-10-12T15:25:48Z | 2021-10-12T15:25:46Z | null | As mentioned in https://github.com/huggingface/datasets/issues/3032 loading data files from a private repository isn't working correctly because of the data files resolver.
#2986 did a refactor of the data files resolver. I added authentication to it.
I also improved it to glob inside ZIP archives to look for json/csv/etc. files and infer which dataset builder (json/csv/etc.) to use.
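A simplified sketch of what that inference step looks like (illustrative only; the mapping and function names here are hypothetical, and the actual resolver in `datasets` is more involved):

```python
import zipfile
from collections import Counter
from typing import Optional

# Hypothetical mapping from file extension to builder module name.
EXTENSION_TO_MODULE = {
    ".csv": "csv",
    ".tsv": "csv",
    ".json": "json",
    ".jsonl": "json",
    ".txt": "text",
}


def infer_module_from_zip(zip_path: str) -> Optional[str]:
    """Return the builder module whose extension is most frequent in the archive."""
    with zipfile.ZipFile(zip_path) as zf:
        # Skip directory entries, keep only actual files.
        names = [n for n in zf.namelist() if not n.endswith("/")]
    extensions = ["." + n.rsplit(".", 1)[-1] for n in names if "." in n]
    counts = Counter(EXTENSION_TO_MODULE[e] for e in extensions if e in EXTENSION_TO_MODULE)
    return counts.most_common(1)[0][0] if counts else None
```

For an archive containing mostly `.csv` files this returns `"csv"`, so the csv builder can be selected without the user naming it explicitly.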
Fix https://github.com/huggingface/datasets/issues/3032
Note that #2986 needs to get merged first | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3041/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3041/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3041.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3041",
"merged_at": "2021-10-12T15:25:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3041.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3041"
} | true | [
"I have an error on windows:\r\n```python\r\naiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host moon-staging.huggingface.co:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: certificate has expired (_ssl.c:1131)')]\r\n```\r\nat th... |
https://api.github.com/repos/huggingface/datasets/issues/2627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2627/comments | https://api.github.com/repos/huggingface/datasets/issues/2627/events | https://github.com/huggingface/datasets/pull/2627 | 941,503,349 | MDExOlB1bGxSZXF1ZXN0Njg3MzczMDg1 | 2,627 | Minor fix tests with Windows paths | [] | closed | false | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | 0 | 2021-07-11T17:55:48Z | 2021-07-12T14:08:47Z | 2021-07-12T08:34:50Z | null | Minor fix tests with Windows paths. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2627/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2627/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2627.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2627",
"merged_at": "2021-07-12T08:34:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2627.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2627"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4137/comments | https://api.github.com/repos/huggingface/datasets/issues/4137/events | https://github.com/huggingface/datasets/pull/4137 | 1,199,000,453 | PR_kwDODunzps419D6A | 4,137 | Add single dataset citations for TweetEval | [] | closed | false | null | 2 | 2022-04-10T11:51:54Z | 2022-04-12T07:57:22Z | 2022-04-12T07:51:15Z | null | This PR adds single dataset citations, as per the request of the original creators of the TweetEval dataset.
This is a recent email from the creator:
> Could I ask you a favor? Would you be able to add at the end of the README the citations of the single datasets as well? You can just copy our readme maybe? https://github.com/cardiffnlp/tweeteval#citing-tweeteval
(just to be sure that the creator of the single datasets also get credits when tweeteval is used)
Please let me know if this looks okay or if any changes are needed.
Thanks,
Gunjan
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4137/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4137/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4137.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4137",
"merged_at": "2022-04-12T07:51:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4137.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4137"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The `test_dataset_cards` method is failing with the error:\r\n\r\n```\r\nif error_messages:\r\n> raise ValueError(\"\\n\".join(error_messages))\r\nE ValueError: The following issues have been found in the dataset ... |
https://api.github.com/repos/huggingface/datasets/issues/3638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3638/comments | https://api.github.com/repos/huggingface/datasets/issues/3638/events | https://github.com/huggingface/datasets/issues/3638 | 1,115,725,703 | I_kwDODunzps5CgJ-H | 3,638 | AutoTokenizer hash value got change after datasets.map | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 11 | 2022-01-27T03:19:03Z | 2022-08-26T07:47:56Z | null | null | ## Describe the bug
AutoTokenizer hash value gets changed after `datasets.map`
## Steps to reproduce the bug
1. trash huggingface datasets cache
2. run the following code:
```python
from transformers import AutoTokenizer, BertTokenizer
from datasets import load_dataset
from datasets.fingerprint import Hasher
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
def tokenize_function(example):
return tokenizer(example["sentence1"], example["sentence2"], truncation=True)
raw_datasets = load_dataset("glue", "mrpc")
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
```
got
```
Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1112.35it/s]
f4976bb4694ebc51
3fca35a1fd4a1251
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 6.96ba/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 15.25ba/s]
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 5.81ba/s]
d32837619b7d7d01
5fd925c82edd62b6
```
3. run `raw_datasets.map(tokenize_function, batched=True)` again and see that some datasets are not using the cache.
## Expected results
`AutoTokenizer` should work like a specific tokenizer (the hash value doesn't change after `map`):
```python
from transformers import AutoTokenizer, BertTokenizer
from datasets import load_dataset
from datasets.fingerprint import Hasher
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
def tokenize_function(example):
return tokenizer(example["sentence1"], example["sentence2"], truncation=True)
raw_datasets = load_dataset("glue", "mrpc")
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
print(Hasher.hash(tokenize_function))
print(Hasher.hash(tokenizer))
```
```
Reusing dataset glue (/home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad)
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1091.22it/s]
46d4b31f54153fc7
5b8771afd8d43888
Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-6b07ff82ae9d5c51.arrow
Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-af738a6d84f3864b.arrow
Loading cached processed dataset at /home1/wts/.cache/huggingface/datasets/glue/mrpc/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad/cache-531d2a603ba713c1.arrow
46d4b31f54153fc7
5b8771afd8d43888
```
## Environment info
- `datasets` version: 1.18.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.27
- Python version: 3.9.7
- PyArrow version: 6.0.1
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3638/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3638/timeline | null | null | null | null | false | [
"This issue was original reported at https://github.com/huggingface/transformers/issues/14931 and It seems like this issue also occur with other AutoClass like AutoFeatureExtractor.",
"Thanks for moving the issue here !\r\n\r\nI wasn't able to reproduce the issue on my env (the hashes stay the same):\r\n```\r\n- ... |
https://api.github.com/repos/huggingface/datasets/issues/221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/221/comments | https://api.github.com/repos/huggingface/datasets/issues/221/events | https://github.com/huggingface/datasets/pull/221 | 627,300,648 | MDExOlB1bGxSZXF1ZXN0NDI1MTI5OTc0 | 221 | Fix tests/test_dataset_common.py | [] | closed | false | null | 1 | 2020-05-29T14:12:15Z | 2020-06-01T12:20:42Z | 2020-05-29T15:02:23Z | null | When I run the command `RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_arcd` while working on #220. I get the error ` unexpected keyword argument "'download_and_prepare_kwargs'"` at the level of `load_dataset`. Indeed, this [function](https://github.com/huggingface/nlp/blob/master/src/nlp/load.py#L441) no longer has the argument `download_and_prepare_kwargs` but rather `download_config`. So here I change the tests accordingly. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/221/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/221/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/221.diff",
"html_url": "https://github.com/huggingface/datasets/pull/221",
"merged_at": "2020-05-29T15:02:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/221.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/221"
} | true | [
"Thanks ! Good catch :)\r\n\r\nTo fix the CI you can do:\r\n1 - rebase from master\r\n2 - then run `make style` as specified in [CONTRIBUTING.md](https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md) ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/3567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3567/comments | https://api.github.com/repos/huggingface/datasets/issues/3567/events | https://github.com/huggingface/datasets/pull/3567 | 1,100,296,696 | PR_kwDODunzps4w2xDl | 3,567 | Fix push to hub to allow individual split push | [] | closed | false | null | 1 | 2022-01-12T12:42:58Z | 2022-07-27T12:11:12Z | 2022-07-27T12:11:11Z | null | # Description of the issue
If one decides to push a single split to a datasets repo, the dataset is uploaded and the config is overridden. However, the previous config's splits end up being lost, despite the underlying data still being present.
The new flow is the following:
- query the old config from the repo
- update into a new config (add/overwrite new split for example)
- push the new config
# Side fix
- `repo_id` in HfFileSystem was wrongly typed.
- I've added `indent=2` as it becomes much easier to read now.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3567/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3567/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3567.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3567",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3567.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3567"
} | true | [
"This has been addressed in https://github.com/huggingface/datasets/pull/4415. Closing."
] |
https://api.github.com/repos/huggingface/datasets/issues/3023 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3023/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3023/comments | https://api.github.com/repos/huggingface/datasets/issues/3023/events | https://github.com/huggingface/datasets/pull/3023 | 1,015,923,031 | PR_kwDODunzps4srQ4i | 3,023 | Fix typo | [] | closed | false | null | 0 | 2021-10-05T06:06:11Z | 2021-10-05T11:56:55Z | 2021-10-05T11:56:55Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3023/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3023/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3023.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3023",
"merged_at": "2021-10-05T11:56:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3023.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3023"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/459/comments | https://api.github.com/repos/huggingface/datasets/issues/459/events | https://github.com/huggingface/datasets/pull/459 | 669,545,437 | MDExOlB1bGxSZXF1ZXN0NDU5OTAxMjEy | 459 | [Breaking] Update Dataset and DatasetDict API | [] | closed | false | null | 0 | 2020-07-31T08:11:33Z | 2020-08-26T08:28:36Z | 2020-08-26T08:28:35Z | null | This PR contains a few breaking changes so it's probably good to keep it for the next (major) release:
- rename the `flatten`, `drop` and `dictionary_encode_column` methods to `flatten_`, `drop_` and `dictionary_encode_column_` to indicate that these methods have in-place effects as discussed in #166. From now on we should keep the convention of having a trailing underscore for methods which have an in-place effect. I also adopt the convention of not returning the (self) dataset for these methods. This is different than what PyTorch does for instance (`model.to()` is in-place but returns the self model) but I feel like it's a safer approach in terms of UX.
- remove the `dataset.columns` property which returns a low-level Apache Arrow object and should not be used by users. Similarly, remove `dataset.nbytes` which we don't really want to expose in this bare-bone format.
- add a few more properties and methods to `DatasetDict` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/459/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/459/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/459.diff",
"html_url": "https://github.com/huggingface/datasets/pull/459",
"merged_at": "2020-08-26T08:28:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/459.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/459"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3150/comments | https://api.github.com/repos/huggingface/datasets/issues/3150/events | https://github.com/huggingface/datasets/issues/3150 | 1,033,831,530 | I_kwDODunzps49nwRq | 3,150 | Faiss _is_ available on Windows | [] | closed | false | null | 1 | 2021-10-22T18:07:16Z | 2021-11-02T10:06:03Z | 2021-11-02T10:06:03Z | null | In the setup file, I find the following:
https://github.com/huggingface/datasets/blob/87c71b9c29a40958973004910f97e4892559dfed/setup.py#L171
However, FAISS does install perfectly fine on Windows on my system. You can also confirm this on the [PyPi page](https://pypi.org/project/faiss-cpu/#files), where Windows wheels are available. Maybe this was true for older versions? For current versions, this can be removed I think.
(This isn't really a bug but didn't know how else to tag.)
If you agree I can do a quick PR and remove that line. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3150/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3150/timeline | null | completed | null | null | false | [
"Sure, feel free to open a PR."
] |
https://api.github.com/repos/huggingface/datasets/issues/264 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/264/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/264/comments | https://api.github.com/repos/huggingface/datasets/issues/264/events | https://github.com/huggingface/datasets/pull/264 | 637,106,170 | MDExOlB1bGxSZXF1ZXN0NDMzMTU0ODQ4 | 264 | Fix small issues creating dataset | [] | closed | false | null | 0 | 2020-06-11T15:20:16Z | 2020-06-12T08:15:57Z | 2020-06-12T08:15:56Z | null | Fix many small issues mentioned in #249:
- don't force to install apache beam for commands
- fix None cache dir when using `dl_manager.download_custom`
- added new extras in `setup.py` named `dev` that contains tests and quality dependencies
- mock dataset sizes when running tests with dummy data
- add a note about the naming convention of datasets (camel case - snake case) in CONTRIBUTING.md
This should help users create their datasets.
Next step is the `add_dataset.md` docs :) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/264/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/264/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/264.diff",
"html_url": "https://github.com/huggingface/datasets/pull/264",
"merged_at": "2020-06-12T08:15:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/264.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/264"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3586 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3586/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3586/comments | https://api.github.com/repos/huggingface/datasets/issues/3586/events | https://github.com/huggingface/datasets/issues/3586 | 1,106,455,672 | I_kwDODunzps5B8yx4 | 3,586 | Revisit `enable/disable_` toggle function prefix | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2022-01-18T04:09:55Z | 2022-03-14T15:01:08Z | 2022-03-14T15:01:08Z | null | As discussed in https://github.com/huggingface/transformers/pull/15167, we should revisit the `enable/disable_` toggle function prefix, potentially in favor of `set_enabled_`. Concretely, this translates to
- De-deprecating `disable_progress_bar()`
- Adding `enable_progress_bar()`
- On the caching side, adding `enable_caching` and `disable_caching`
Additional decisions have to be made with regards to the existing `set_enabled_X` functions; that is, whether to keep them as is or deprecate them in favor of the aforementioned functions.
cc @mariosasko @lhoestq | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3586/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3586/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1227/comments | https://api.github.com/repos/huggingface/datasets/issues/1227/events | https://github.com/huggingface/datasets/pull/1227 | 758,049,060 | MDExOlB1bGxSZXF1ZXN0NTMzMjg1ODIx | 1,227 | readme: remove link to Google's responsible AI practices | [] | closed | false | null | 0 | 2020-12-06T23:17:22Z | 2020-12-07T08:35:19Z | 2020-12-06T23:20:41Z | null | ...maybe we'll find a company that reallly stands behind responsible AI practices ;) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1227/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1227/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1227.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1227",
"merged_at": "2020-12-06T23:20:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1227.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1227"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1580/comments | https://api.github.com/repos/huggingface/datasets/issues/1580/events | https://github.com/huggingface/datasets/pull/1580 | 768,111,377 | MDExOlB1bGxSZXF1ZXN0NTQwNjQxNDQ3 | 1,580 | made suggested changes in diplomacy_detection.py | [] | closed | false | null | 0 | 2020-12-15T19:52:00Z | 2020-12-16T10:27:52Z | 2020-12-16T10:27:52Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1580/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1580.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1580",
"merged_at": "2020-12-16T10:27:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1580.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1580"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/792/comments | https://api.github.com/repos/huggingface/datasets/issues/792/events | https://github.com/huggingface/datasets/issues/792 | 734,693,652 | MDU6SXNzdWU3MzQ2OTM2NTI= | 792 | KILT dataset: empty string in triviaqa input field | [] | closed | false | null | 1 | 2020-11-02T17:33:54Z | 2020-11-05T10:34:59Z | 2020-11-05T10:34:59Z | null | # What happened
Both the train and test splits of the triviaqa dataset (part of the KILT benchmark) seem to have empty strings in their input field (unlike the natural questions dataset, part of the same benchmark)
# Versions
KILT version is `1.0.0`
`datasets` version is `1.1.2`
[more here](https://gist.github.com/PaulLerner/3768c8d25f723edbac20d99b6a4056c1)
# How to reproduce
```py
In [1]: from datasets import load_dataset
In [4]: dataset = load_dataset("kilt_tasks")
# everything works fine, removed output for a better readibility
Dataset kilt_tasks downloaded and prepared to /people/lerner/.cache/huggingface/datasets/kilt_tasks/all_tasks/1.0.0/821c4295a2c35db2847585918d9c47d7f028f1a26b78825d8e77cd3aeb2621a1. Subsequent calls will reuse this data.
# empty string in triviaqa input field
In [36]: dataset['train_triviaqa'][0]
Out[36]:
{'id': 'dpql_5197',
'input': '',
'meta': {'left_context': '',
'mention': '',
'obj_surface': {'text': []},
'partial_evidence': {'end_paragraph_id': [],
'meta': [],
'section': [],
'start_paragraph_id': [],
'title': [],
'wikipedia_id': []},
'right_context': '',
'sub_surface': {'text': []},
'subj_aliases': {'text': []},
'template_questions': {'text': []}},
'output': {'answer': ['five £', '5 £', '£5', 'five £'],
'meta': [],
'provenance': [{'bleu_score': [1.0],
'end_character': [248],
'end_paragraph_id': [30],
'meta': [],
'section': ['Section::::Question of legal tender.\n'],
'start_character': [246],
'start_paragraph_id': [30],
'title': ['Banknotes of the pound sterling'],
'wikipedia_id': ['270680']}]}}
In [35]: dataset['train_triviaqa']['input'][:10]
Out[35]: ['', '', '', '', '', '', '', '', '', '']
# same with test set
In [37]: dataset['test_triviaqa']['input'][:10]
Out[37]: ['', '', '', '', '', '', '', '', '', '']
# works fine with natural questions
In [34]: dataset['train_nq']['input'][:10]
Out[34]:
['how i.met your mother who is the mother',
'who had the most wins in the nfl',
'who played mantis guardians of the galaxy 2',
'what channel is the premier league on in france',
"god's not dead a light in the darkness release date",
'who is the current president of un general assembly',
'when do the eclipse supposed to take place',
'what is the name of the sea surrounding dubai',
'who holds the nba record for most points in a career',
'when did the new maze runner movie come out']
```
Stay safe :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/792/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/792/timeline | null | completed | null | null | false | [
"Just found out about https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md\r\n(Not very clear in https://huggingface.co/datasets/kilt_tasks links to http://github.com/huggingface/datasets/datasets/kilt_tasks/README.md which is dead, closing the issue though :))"
] |
https://api.github.com/repos/huggingface/datasets/issues/2086 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2086/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2086/comments | https://api.github.com/repos/huggingface/datasets/issues/2086/events | https://github.com/huggingface/datasets/pull/2086 | 836,249,587 | MDExOlB1bGxSZXF1ZXN0NTk2Nzg0Mjcz | 2,086 | change user permissions to -rw-r--r-- | [] | closed | false | null | 1 | 2021-03-19T18:14:56Z | 2021-03-24T13:59:04Z | 2021-03-24T13:59:04Z | null | Fix for #2065 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2086/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2086/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2086.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2086",
"merged_at": "2021-03-24T13:59:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2086.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2086"
} | true | [
"I tried this with `ade_corpus_v2` dataset. `ade_corpus_v2-train.arrow` (downloaded dataset) and `cache-25d41a4d3c2d8a25.arrow` (ran a mapping function on the dataset) both had file permission with octal value of `0644`. "
] |
https://api.github.com/repos/huggingface/datasets/issues/1024 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1024/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1024/comments | https://api.github.com/repos/huggingface/datasets/issues/1024/events | https://github.com/huggingface/datasets/pull/1024 | 755,664,113 | MDExOlB1bGxSZXF1ZXN0NTMxMzMzOTc5 | 1,024 | Add ZEST: ZEroShot learning from Task descriptions | [] | closed | false | null | 1 | 2020-12-02T22:41:20Z | 2020-12-03T19:21:00Z | 2020-12-03T16:09:15Z | null | Adds the ZEST dataset on zero-shot learning from task descriptions from AI2.
- Webpage: https://allenai.org/data/zest
- Paper: https://arxiv.org/abs/2011.08115
The nature of this dataset made the supported task tags tricky if you wouldn't mind giving any feedback @yjernite. Also let me know if you think we should have a `other-task-generalization` or something like that... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1024/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1024/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1024.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1024",
"merged_at": "2020-12-03T16:09:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1024.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1024"
} | true | [
"Looks good to me, we can ping the authors for more info later. And yes apply `other-task` labels liberally, we can sort them out later :) \r\n\r\nLooks ready to merge when you're ready @joeddav "
] |
https://api.github.com/repos/huggingface/datasets/issues/1129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1129/comments | https://api.github.com/repos/huggingface/datasets/issues/1129/events | https://github.com/huggingface/datasets/pull/1129 | 757,255,492 | MDExOlB1bGxSZXF1ZXN0NTMyNjYxNzM2 | 1,129 | Adding initial version of cord-19 dataset | [] | closed | false | null | 5 | 2020-12-04T17:03:17Z | 2021-02-09T10:22:35Z | 2021-02-09T10:18:06Z | null | Initial version only reading the metadata in CSV.
### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class.
- [x] Generate the metadata file dataset_infos.json for all configurations
- [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB)
- [x] Add the dataset card README.md using the template and at least fill the tags
- [x] Both tests for the real data and the dummy data pass.
### TODO:
- [x] add more metadata
- [x] add full text
- [x] add pre-computed document embedding | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1129/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1129/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1129.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1129",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1129.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1129"
} | true | [
"Hi @ggdupont !\r\nHave you had a chance to take a look at my suggestions ?\r\nFeel free to ping me if you have questions or when you're ready for a review",
"> Hi @ggdupont !\r\n> Have you had a chance to take a look at my suggestions ?\r\n> Feel free to ping me if you have questions or when you're ready for a r... |
https://api.github.com/repos/huggingface/datasets/issues/2099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2099/comments | https://api.github.com/repos/huggingface/datasets/issues/2099/events | https://github.com/huggingface/datasets/issues/2099 | 838,523,819 | MDU6SXNzdWU4Mzg1MjM4MTk= | 2,099 | load_from_disk takes a long time to load local dataset | [] | closed | false | null | 8 | 2021-03-23T09:28:37Z | 2021-03-23T17:12:16Z | 2021-03-23T17:12:16Z | null | I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helping (the total size seems to be smaller though).
Does anyone know what could be the issue? Or does the casting of that column to `int8` need to happen in the function that writes the arrow table instead of in the `map` where I create the list of integers?
Tagging @lhoestq since you seem to be working on these issues and PRs :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2099/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2099/timeline | null | completed | null | null | false | [
"Hi !\r\nCan you share more information about the features of your dataset ? You can get them by printing `my_dataset.features`\r\nCan you also share the code of your `map` function ?",
"It is actually just the tokenized `wikipedia` dataset with `input_ids`, `attention_mask`, etc, with one extra column which is a... |
https://api.github.com/repos/huggingface/datasets/issues/4766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4766/comments | https://api.github.com/repos/huggingface/datasets/issues/4766/events | https://github.com/huggingface/datasets/issues/4766 | 1,321,809,380 | I_kwDODunzps5OyTXk | 4,766 | Dataset Viewer issue for openclimatefix/goes-mrms | [] | open | false | null | 1 | 2022-07-29T06:17:14Z | 2022-07-29T08:43:58Z | null | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4766/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4766/timeline | null | null | null | null | false | [
"Thanks for reporting, @cheaterHy.\r\n\r\nThe cause of this issue is a misalignment between the names of the repo (`goes-mrms`, with hyphen) and its Python loading scrip file (`goes_mrms.py`, with underscore).\r\n\r\nI've opened an Issue discussion in their repo: https://huggingface.co/datasets/openclimatefix/goes-... |
https://api.github.com/repos/huggingface/datasets/issues/783 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/783/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/783/comments | https://api.github.com/repos/huggingface/datasets/issues/783/events | https://github.com/huggingface/datasets/pull/783 | 733,536,254 | MDExOlB1bGxSZXF1ZXN0NTEzMzAwODUz | 783 | updated links to v1.3 of quail, fixed the description | [] | closed | false | null | 1 | 2020-10-30T21:47:33Z | 2020-11-29T23:05:19Z | 2020-11-29T23:05:18Z | null | updated links to v1.3 of quail, fixed the description | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/783/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/783/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/783.diff",
"html_url": "https://github.com/huggingface/datasets/pull/783",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/783.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/783"
} | true | [
"we're using quail 1.3 now thanks.\r\nclosing this one"
] |
https://api.github.com/repos/huggingface/datasets/issues/1226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1226/comments | https://api.github.com/repos/huggingface/datasets/issues/1226/events | https://github.com/huggingface/datasets/pull/1226 | 758,036,979 | MDExOlB1bGxSZXF1ZXN0NTMzMjc2OTU3 | 1,226 | Add menyo_20k_mt dataset | [] | closed | false | null | 2 | 2020-12-06T22:16:15Z | 2020-12-10T19:22:14Z | 2020-12-10T19:22:14Z | null | Add menyo_20k_mt dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1226/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1226/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1226.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1226",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1226.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1226"
} | true | [
"looks like your PR includes changes about many other files than the ones for menyo 20k mt\r\nCan you create another branch and another PR please ?",
"Yes, I will"
] |
https://api.github.com/repos/huggingface/datasets/issues/4634 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4634/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4634/comments | https://api.github.com/repos/huggingface/datasets/issues/4634/events | https://github.com/huggingface/datasets/issues/4634 | 1,294,405,251 | I_kwDODunzps5NJw6D | 4,634 | Can't load the Hausa audio dataset | [] | closed | false | null | 1 | 2022-07-05T14:47:36Z | 2022-09-13T14:07:32Z | 2022-09-13T14:07:32Z | null | common_voice_train = load_dataset("common_voice", "ha", split="train+validation") | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4634/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4634/timeline | null | completed | null | null | false | [
"Could you provide the error details. It is difficult to debug otherwise. Also try other config. `ha` is not a valid."
] |
https://api.github.com/repos/huggingface/datasets/issues/119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/119/comments | https://api.github.com/repos/huggingface/datasets/issues/119/events | https://github.com/huggingface/datasets/issues/119 | 618,652,145 | MDU6SXNzdWU2MTg2NTIxNDU= | 119 | 🐛 Colab : type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array' | [] | closed | false | null | 2 | 2020-05-15T02:27:26Z | 2020-05-15T05:11:22Z | 2020-05-15T02:45:28Z | null | I'm trying to load CNN/DM dataset on Colab.
[Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing)
But I meet this error :
> AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/119/timeline | null | completed | null | null | false | [
"It's strange, after installing `nlp` on Colab, the `pyarrow` version seems fine from `pip` but not from python :\r\n\r\n```python\r\nimport pyarrow\r\n\r\n!pip show pyarrow\r\nprint(\"version = {}\".format(pyarrow.__version__))\r\n```\r\n\r\n> Name: pyarrow\r\nVersion: 0.17.0\r\nSummary: Python library for Apache ... |
https://api.github.com/repos/huggingface/datasets/issues/2703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2703/comments | https://api.github.com/repos/huggingface/datasets/issues/2703/events | https://github.com/huggingface/datasets/issues/2703 | 950,482,284 | MDU6SXNzdWU5NTA0ODIyODQ= | 2,703 | Bad message when config name is missing | [] | closed | false | null | 0 | 2021-07-22T09:47:23Z | 2021-07-22T10:02:40Z | 2021-07-22T10:02:40Z | null | When loading a dataset that has several configurations, we expect to see an error message if the user doesn't specify a config name.
However in `datasets` 1.10.0 and 1.10.1 it doesn't show the right message:
```python
import datasets
datasets.load_dataset("glue")
```
raises
```python
AttributeError: 'BuilderConfig' object has no attribute 'text_features'
```
instead of
```python
ValueError: Config name is missing.
Please pick one among the available configs: ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'mnli_mismatched', 'mnli_matched', 'qnli', 'rte', 'wnli', 'ax']
Example of usage:
`load_dataset('glue', 'cola')`
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2703/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2703/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5786 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5786/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5786/comments | https://api.github.com/repos/huggingface/datasets/issues/5786/events | https://github.com/huggingface/datasets/issues/5786 | 1,680,957,070 | I_kwDODunzps5kMV6O | 5,786 | Multiprocessing in a `filter` or `map` function with a Pytorch model | [] | closed | false | null | 5 | 2023-04-24T10:38:07Z | 2023-05-30T09:56:30Z | 2023-04-24T10:43:58Z | null | ### Describe the bug
I am trying to use a Pytorch model loaded on CPUs with multiple processes with a `.map` or a `.filter` method.
Usually, when dealing with models that are non-picklable, creating a class such that the `map` function is the method `__call__`, and adding a `__reduce__` method, helps to solve the problem.
However, here, the command hangs without throwing an error.
### Steps to reproduce the bug
```
from datasets import Dataset
import torch
from torch import nn
from torchvision import models
class FilterFunction:
    #__slots__ = ("path_model", "model") # Doesn't change anything uncommented
    def __init__(self, path_model):
        self.path_model = path_model
        model = models.resnet50()
        model.fc = nn.Sequential(
            nn.Linear(2048, 512),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(512, 10),
            nn.LogSoftmax(dim=1)
        )
        model.load_state_dict(torch.load(path_model, map_location=torch.device("cpu")))
        model.eval()
        self.model = model
    def __call__(self, batch):
        return [True] * len(batch["id"])
    # Comment this to have an error
    def __reduce__(self):
        return (self.__class__, (self.path_model,))
dataset = Dataset.from_dict({"id": [0, 1, 2, 4]})
# Download (100 MB) at https://github.com/emiliantolo/pytorch_nsfw_model/raw/master/ResNet50_nsfw_model.pth
path_model = "/fsx/hugo/nsfw_image/ResNet50_nsfw_model.pth"
filter_function = FilterFunction(path_model=path_model)
# Works
filtered_dataset = dataset.filter(filter_function, num_proc=1, batched=True, batch_size=2)
# Doesn't work
filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2)
```
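As noted in the comments below, the hang comes from calling `load_state_dict()` in a forked subprocess, and the suggested fix is to switch the multiprocessing start method to "spawn". A minimal sketch of that call — shown here with the standard-library `multiprocessing` module as a stand-in for the `multiprocess` fork that `datasets` actually uses:

```python
import multiprocessing

# `datasets` uses the `multiprocess` fork of this module; the call there
# is analogous (`import multiprocess; multiprocess.set_start_method(...)`).
# "spawn" starts fresh interpreter processes instead of forking, which
# avoids the hang inside load_state_dict.
multiprocessing.set_start_method("spawn", force=True)
print(multiprocessing.get_start_method())  # spawn
```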
### Expected behavior
The command `filtered_dataset = dataset.filter(filter_function, num_proc=2, batched=True, batch_size=2)` should work and not hang.
### Environment info
Datasets: 2.11.0
Pyarrow: 11.0.0
Ubuntu | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5786/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5786/timeline | null | completed | null | null | false | [
"Hi ! PyTorch may hang when calling `load_state_dict()` in a subprocess. To fix that, set the multiprocessing start method to \"spawn\". Since `datasets` uses `multiprocess`, you should do:\r\n\r\n```python\r\n# Required to avoid issues with pytorch (otherwise hangs during load_state_dict in multiprocessing)\r\nimp... |
https://api.github.com/repos/huggingface/datasets/issues/4161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4161/comments | https://api.github.com/repos/huggingface/datasets/issues/4161/events | https://github.com/huggingface/datasets/pull/4161 | 1,203,230,485 | PR_kwDODunzps42LEhi | 4,161 | Add Visual Genome | [] | closed | false | null | 4 | 2022-04-13T12:25:24Z | 2022-04-21T15:42:49Z | 2022-04-21T13:08:52Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4161/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4161/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4161.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4161",
"merged_at": "2022-04-21T13:08:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4161.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4161"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hum there seems to be some issues with tasks in test:\r\n - some tasks don't fit anything in `tasks.json`. Do I remove them in `task_categories`?\r\n - some tasks should exist, typically `visual-question-answering` (https://github.co... |
https://api.github.com/repos/huggingface/datasets/issues/4598 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4598/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4598/comments | https://api.github.com/repos/huggingface/datasets/issues/4598/events | https://github.com/huggingface/datasets/pull/4598 | 1,288,774,514 | PR_kwDODunzps46kfOS | 4,598 | Host financial_phrasebank data on the Hub | [] | closed | false | null | 1 | 2022-06-29T13:59:31Z | 2022-07-01T09:41:14Z | 2022-07-01T09:29:36Z | null |
Fix #4597. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4598/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4598/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4598.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4598",
"merged_at": "2022-07-01T09:29:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4598.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4598"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3964 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3964/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3964/comments | https://api.github.com/repos/huggingface/datasets/issues/3964/events | https://github.com/huggingface/datasets/issues/3964 | 1,173,564,993 | I_kwDODunzps5F8y5B | 3,964 | Add default Audio Loader | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2022-03-18T12:58:55Z | 2022-08-22T14:20:46Z | 2022-08-22T14:20:46Z | null | **Is your feature request related to a problem? Please describe.**
Writing a custom dataset loading script might be a bit challenging for users.
**Describe the solution you'd like**
Add a default Audio loader (analogous to ImageFolder) for small datasets with a standard directory structure.
**Describe alternatives you've considered**
Create a custom loading script? That's what users are doing now.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3964/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3964/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1686/comments | https://api.github.com/repos/huggingface/datasets/issues/1686/events | https://github.com/huggingface/datasets/issues/1686 | 778,921,684 | MDU6SXNzdWU3Nzg5MjE2ODQ= | 1,686 | Dataset Error: DaNE contains empty samples at the end | [] | closed | false | null | 3 | 2021-01-05T11:54:26Z | 2021-01-05T14:01:09Z | 2021-01-05T14:00:13Z | null | The dataset DaNE contains empty samples at the end. It is naturally easy to remove using a filter, but it should probably not be there to begin with, as it can cause errors.
```python
>>> import datasets
[...]
>>> dataset = datasets.load_dataset("dane")
[...]
>>> dataset["test"][-1]
{'dep_ids': [], 'dep_labels': [], 'lemmas': [], 'morph_tags': [], 'ner_tags': [], 'pos_tags': [], 'sent_id': '', 'text': '', 'tok_ids': [], 'tokens': []}
>>> dataset["train"][-1]
{'dep_ids': [], 'dep_labels': [], 'lemmas': [], 'morph_tags': [], 'ner_tags': [], 'pos_tags': [], 'sent_id': '', 'text': '', 'tok_ids': [], 'tokens': []}
```
Best,
Kenneth | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1686/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1686/timeline | null | completed | null | null | false | [
"Thanks for reporting, I opened a PR to fix that",
"One the PR is merged the fix will be available in the next release of `datasets`.\r\n\r\nIf you don't want to wait the next release you can still load the script from the master branch with\r\n\r\n```python\r\nload_dataset(\"dane\", script_version=\"master\")\r\... |
https://api.github.com/repos/huggingface/datasets/issues/1075 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1075/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1075/comments | https://api.github.com/repos/huggingface/datasets/issues/1075/events | https://github.com/huggingface/datasets/pull/1075 | 756,501,235 | MDExOlB1bGxSZXF1ZXN0NTMyMDM4ODg1 | 1,075 | adding cleaned verion of E2E NLG | [] | closed | false | null | 0 | 2020-12-03T19:21:07Z | 2020-12-03T19:43:56Z | 2020-12-03T19:43:56Z | null | Found at: https://github.com/tuetschek/e2e-cleaning | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1075/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1075/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1075.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1075",
"merged_at": "2020-12-03T19:43:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1075.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1075"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4984 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4984/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4984/comments | https://api.github.com/repos/huggingface/datasets/issues/4984/events | https://github.com/huggingface/datasets/pull/4984 | 1,375,690,330 | PR_kwDODunzps4_FhTm | 4,984 | docs: ✏️ add links to the Datasets API | [] | closed | false | null | 2 | 2022-09-16T09:34:12Z | 2022-09-16T13:10:14Z | 2022-09-16T13:07:33Z | null | I added some links to the Datasets API in the docs. See https://github.com/huggingface/datasets-server/pull/566 for a companion PR in the datasets-server. The idea is to improve the discovery of the API through the docs.
I'm a bit shy about pasting a lot of links to the API in the docs, so it's minimal for now. I'm interested in ideas to integrate the API better in these docs without being too much. cc @lhoestq @julien-c @albertvillanova @stevhliu. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4984/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4984/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4984.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4984",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4984.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4984"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"OK, thanks @lhoestq. I'll close this PR, and come back to it with @stevhliu once we work on https://github.com/huggingface/datasets-server/issues/568"
] |
https://api.github.com/repos/huggingface/datasets/issues/2880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2880/comments | https://api.github.com/repos/huggingface/datasets/issues/2880/events | https://github.com/huggingface/datasets/pull/2880 | 990,877,940 | MDExOlB1bGxSZXF1ZXN0NzI5NDIzMDMy | 2,880 | Extend support for streaming datasets that use pathlib.Path stem/suffix | [] | closed | false | null | 0 | 2021-09-08T08:42:43Z | 2021-09-09T13:13:29Z | 2021-09-09T13:13:29Z | null | This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the properties `pathlib.Path.stem` and `pathlib.Path.suffix`.
Related to #2876, #2874, #2866.
CC: @severo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2880/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2880/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2880.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2880",
"merged_at": "2021-09-09T13:13:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2880.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2880"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2739/comments | https://api.github.com/repos/huggingface/datasets/issues/2739/events | https://github.com/huggingface/datasets/pull/2739 | 957,751,260 | MDExOlB1bGxSZXF1ZXN0NzAxMTI0ODQ3 | 2,739 | Pass tokenize to sacrebleu only if explicitly passed by user | [] | closed | false | null | 0 | 2021-08-02T05:09:05Z | 2021-08-03T04:23:37Z | 2021-08-03T04:23:37Z | null | Next `sacrebleu` release (v2.0.0) will remove `sacrebleu.DEFAULT_TOKENIZER`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15
This PR passes `tokenize` to `sacrebleu` only if explicitly passed by the user; otherwise it will not pass it (and `sacrebleu` will use its own default, no matter where that default is defined or how it is called).
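A hypothetical helper sketching that forwarding pattern (`build_sacrebleu_kwargs` is an illustrative name, not part of the actual codebase):

```python
def build_sacrebleu_kwargs(tokenize=None, **kwargs):
    # Forward `tokenize` only when the caller set it explicitly, so the
    # library's own default applies in every other case.
    if tokenize is not None:
        kwargs["tokenize"] = tokenize
    return kwargs

print(build_sacrebleu_kwargs())                # {}
print(build_sacrebleu_kwargs(tokenize="13a"))  # {'tokenize': '13a'}
```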
Close: #2737. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2739/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2739/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2739.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2739",
"merged_at": "2021-08-03T04:23:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2739.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2739"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5823 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5823/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5823/comments | https://api.github.com/repos/huggingface/datasets/issues/5823/events | https://github.com/huggingface/datasets/issues/5823 | 1,697,024,789 | I_kwDODunzps5lJosV | 5,823 | [2.12.0] DatasetDict.save_to_disk not saving to S3 | [] | closed | false | null | 2 | 2023-05-05T05:22:59Z | 2023-05-05T15:01:18Z | 2023-05-05T15:01:17Z | null | ### Describe the bug
When trying to save a `DatasetDict` to a private S3 bucket using `save_to_disk`, the artifacts are instead saved locally, and not in the S3 bucket.
I have tried using the deprecated `fs` as well as the `storage_options` arguments and I get the same results.
### Steps to reproduce the bug
1. Create a DatasetDict `dataset`
2. Create a S3FileSystem object
`s3 = datasets.filesystems.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)`
3. Save using `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", storage_options=s3.storage_options)` or `dataset_dict.save_to_disk(f"{s3_bucket}/{s3_dir}/{dataset_name}", fs=s3)`
4. Check the corresponding S3 bucket and verify nothing has been uploaded
5. Check the path at f"{s3_bucket}/{s3_dir}/{dataset_name}" and verify that files have been saved there
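For reference, a sketch of how the destination path differs with and without the `s3://` scheme (bucket and key names here are placeholders; per the resolution in the comments, the scheme prefix is what routes the write to S3 instead of the local filesystem):

```python
s3_bucket = "my-bucket"      # placeholder
s3_dir = "data"              # placeholder
dataset_name = "my_dataset"  # placeholder

# Without the scheme, the path is treated as local and files land on disk:
local_looking = f"{s3_bucket}/{s3_dir}/{dataset_name}"
# With the scheme, the write is routed to S3:
s3_uri = f"s3://{s3_bucket}/{s3_dir}/{dataset_name}"
print(s3_uri)  # s3://my-bucket/data/my_dataset
```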
### Expected behavior
Artifacts are uploaded at the f"{s3_bucket}/{s3_dir}/{dataset_name}" S3 location.
### Environment info
- `datasets` version: 2.12.0
- Platform: macOS-13.3.1-x86_64-i386-64bit
- Python version: 3.11.2
- Huggingface_hub version: 0.14.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5823/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5823/timeline | null | completed | null | null | false | [
"Hi ! Can you try adding the `s3://` prefix ?\r\n```python\r\nf\"s3://{s3_bucket}/{s3_dir}/{dataset_name}\"\r\n```",
"Ugh, yeah that was it. Thank you!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4344/comments | https://api.github.com/repos/huggingface/datasets/issues/4344/events | https://github.com/huggingface/datasets/pull/4344 | 1,234,882,542 | PR_kwDODunzps43xFEn | 4,344 | Fix docstring in DatasetDict::shuffle | [] | closed | false | null | 0 | 2022-05-13T08:06:00Z | 2022-05-25T09:23:43Z | 2022-05-24T15:35:21Z | null | I think due to #1626, the docstring contained this error ever since `seed` was added. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4344/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4344/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4344.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4344",
"merged_at": "2022-05-24T15:35:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4344.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4344"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4133/comments | https://api.github.com/repos/huggingface/datasets/issues/4133/events | https://github.com/huggingface/datasets/issues/4133 | 1,197,830,623 | I_kwDODunzps5HZXHf | 4,133 | HANS dataset preview broken | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | 3 | 2022-04-08T21:06:15Z | 2022-04-13T11:57:34Z | 2022-04-13T11:57:34Z | null | ## Dataset viewer issue for '*hans*'
**Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans)
HANS dataset preview is broken with error 400
Am I the one who added this dataset ? No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4133/timeline | null | completed | null | null | false | [
"The dataset cannot be loaded, be it in normal or streaming mode.\r\n\r\n```python\r\n>>> import datasets\r\n>>> dataset=datasets.load_dataset(\"hans\", split=\"train\", streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/sles... |
https://api.github.com/repos/huggingface/datasets/issues/3060 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3060/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3060/comments | https://api.github.com/repos/huggingface/datasets/issues/3060/events | https://github.com/huggingface/datasets/issues/3060 | 1,022,936,396 | I_kwDODunzps48-MVM | 3,060 | load_dataset('openwebtext') yields "Compressed file ended before the end-of-stream marker was reached" | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-10-11T17:05:27Z | 2021-10-28T05:52:21Z | 2021-10-28T05:52:21Z | null | ## Describe the bug
When I try `load_dataset('openwebtext')`, I receive a "EOFError: Compressed file ended before the end-of-stream marker was reached" error.
## Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset('openwebtext')
```
## Expected results
I expect the `dataset` variable to be properly constructed.
## Actual results
```
File "/home/rschaef/CoCoSci-Language-Distillation/distillation_v2/ratchet_learning/tasks/base.py", line 37, in create_dataset
dataset_str,
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/load.py", line 1117, in load_dataset
use_auth_token=use_auth_token,
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 637, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/builder.py", line 704, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/rschaef/.cache/huggingface/modules/datasets_modules/datasets/openwebtext/85b3ae7051d2d72e7c5fdf6dfb462603aaa26e9ed506202bf3a24d261c6c40a1/openwebtext.py", line 61, in _split_generators
dl_dir = dl_manager.download_and_extract(_URL)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 284, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/download_manager.py", line 261, in extract
partial(cached_path, download_config=download_config), path_or_paths, num_proc=num_proc, disable_tqdm=False
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/py_utils.py", line 197, in map_nested
return function(data_struct)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 316, in cached_path
output_path, force_extract=download_config.force_extract
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 40, in extract
self.extractor.extract(input_path, output_path, extractor=extractor)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 179, in extract
return extractor.extract(input_path, output_path)
File "/home/rschaef/CoCoSci-Language-Distillation/cocosci/lib/python3.6/site-packages/datasets/utils/extract.py", line 53, in extract
tar_file.extractall(output_path)
File "/usr/lib/python3.6/tarfile.py", line 2010, in extractall
numeric_owner=numeric_owner)
File "/usr/lib/python3.6/tarfile.py", line 2052, in extract
numeric_owner=numeric_owner)
File "/usr/lib/python3.6/tarfile.py", line 2122, in _extract_member
self.makefile(tarinfo, targetpath)
File "/usr/lib/python3.6/tarfile.py", line 2171, in makefile
copyfileobj(source, target, tarinfo.size, ReadError, bufsize)
File "/usr/lib/python3.6/tarfile.py", line 249, in copyfileobj
buf = src.read(bufsize)
File "/usr/lib/python3.6/lzma.py", line 200, in read
return self._buffer.read(size)
File "/usr/lib/python3.6/_compression.py", line 68, in readinto
data = self.read(len(byte_view))
File "/usr/lib/python3.6/_compression.py", line 99, in read
raise EOFError("Compressed file ended before the "
python-BaseException
EOFError: Compressed file ended before the end-of-stream marker was reached
```
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-4.4.0-173-generic-x86_64-with-Ubuntu-16.04-xenial
- Python version: 3.6.10
- PyArrow version: 5.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3060/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3060/timeline | null | completed | null | null | false | [
"Hi @RylanSchaeffer, thanks for reporting.\r\n\r\nI'm sorry, but I was not able to reproduce your problem.\r\n\r\nNormally, the reason for this type of error is that, during your download of the data files, this was not fully complete.\r\n\r\nCould you please try to load the dataset again but forcing its redownload... |
https://api.github.com/repos/huggingface/datasets/issues/1528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1528/comments | https://api.github.com/repos/huggingface/datasets/issues/1528/events | https://github.com/huggingface/datasets/pull/1528 | 764,724,035 | MDExOlB1bGxSZXF1ZXN0NTM4NjU0ODU0 | 1,528 | initial commit for Common Crawl Domain Names | [] | closed | false | null | 1 | 2020-12-13T01:32:49Z | 2020-12-18T13:51:38Z | 2020-12-18T10:22:32Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1528/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1528/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1528.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1528",
"merged_at": "2020-12-18T10:22:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1528.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1528"
} | true | [
"Thank you :)"
] | |
https://api.github.com/repos/huggingface/datasets/issues/4341 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4341/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4341/comments | https://api.github.com/repos/huggingface/datasets/issues/4341/events | https://github.com/huggingface/datasets/issues/4341 | 1,234,739,703 | I_kwDODunzps5JmKH3 | 4,341 | Failing CI on Windows for sari and wiki_split metrics | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2022-05-13T04:55:17Z | 2022-05-13T05:47:41Z | 2022-05-13T05:47:41Z | null | ## Describe the bug
Our CI has been failing since yesterday on Windows for the metrics sari and wiki_split:
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_sari - ...
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_wiki_split
```
See: https://app.circleci.com/pipelines/github/huggingface/datasets/11928/workflows/79daa5e7-65c9-4e85-829b-00d2bfbd076a/jobs/71594 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4341/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4341/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5846/comments | https://api.github.com/repos/huggingface/datasets/issues/5846/events | https://github.com/huggingface/datasets/issues/5846 | 1,706,289,290 | I_kwDODunzps5ls-iK | 5,846 | load_dataset('bigcode/the-stack-dedup', streaming=True) very slow! | [] | open | false | null | 4 | 2023-05-11T17:58:57Z | 2023-05-16T03:23:46Z | null | null | ### Describe the bug
Running
```
import datasets
ds = datasets.load_dataset('bigcode/the-stack-dedup', streaming=True)
```
takes about 2.5 minutes!
I would expect this to be near instantaneous. With other datasets, the runtime is one or two seconds.
### Environment info
- `datasets` version: 2.11.0
- Platform: macOS-13.3.1-arm64-arm-64bit
- Python version: 3.10.10
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5846/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5846/timeline | null | null | null | null | false | [
"This is due to the slow resolution of the data files: https://github.com/huggingface/datasets/issues/5537.\r\n\r\nWe plan to switch to `huggingface_hub`'s `HfFileSystem` soon to make the resolution faster (will be up to 20x faster once we merge https://github.com/huggingface/huggingface_hub/pull/1443)\r\n\r\n",
... |
https://api.github.com/repos/huggingface/datasets/issues/3855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3855/comments | https://api.github.com/repos/huggingface/datasets/issues/3855/events | https://github.com/huggingface/datasets/issues/3855 | 1,162,448,589 | I_kwDODunzps5FSY7N | 3,855 | Bad error message when loading private dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-03-08T09:55:17Z | 2022-07-11T15:06:40Z | 2022-07-11T15:06:40Z | null | ## Describe the bug
A pretty common behavior of an interaction between the Hub and datasets is the following.
An organization adds a dataset in private mode and wants to load it afterward.
```python
from datasets import load_dataset
ds = load_dataset("NewT5/dummy_data", "dummy")
```
This command then fails with:
```bash
FileNotFoundError: Couldn't find a dataset script at /home/patrick/NewT5/dummy_data/dummy_data.py or any data file in the same directory. Couldn't find 'NewT5/dummy_data' on the Hugging Face Hub either: FileNotFoundError: Dataset 'NewT5/dummy_data' doesn't exist on the Hub
```
**even though** the user has access to the website `NewT5/dummy_data` since she/he is part of the org.
We need to improve the error message here similar to how @sgugger, @LysandreJik and @julien-c have done it for transformers IMO.
## Steps to reproduce the bug
E.g. execute the following code to see the different error messages between `transformers` and `datasets`.
1. Transformers
```python
from transformers import BertModel
BertModel.from_pretrained("NewT5/dummy_model")
```
The error message is clearer here - it gives:
```
OSError: patrickvonplaten/gpt2-xl is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo with `use_auth_token` or log in with `huggingface-cli login` and pass `use_auth_token=True`.
```
Let's maybe do the same for datasets? The PR was introduced to `transformers` here:
https://github.com/huggingface/transformers/pull/15261
## Expected results
Better error message
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4.dev0
- Platform: Linux-5.15.15-76051515-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3855/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3855/timeline | null | completed | null | null | false | [
"We raise the error “ FileNotFoundError: can’t find the dataset” mainly to follow best practice in security (otherwise users could be able to guess what private repositories users/orgs may have)\r\n\r\nWe can indeed reformulate this and add the \"If this is a private repository,...\" part !",
"Resolved via https:... |
https://api.github.com/repos/huggingface/datasets/issues/3748 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3748/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3748/comments | https://api.github.com/repos/huggingface/datasets/issues/3748/events | https://github.com/huggingface/datasets/pull/3748 | 1,142,128,763 | PR_kwDODunzps4zCEyM | 3,748 | Add tqdm arguments | [] | closed | false | null | 0 | 2022-02-18T00:47:55Z | 2022-02-18T00:59:15Z | 2022-02-18T00:59:15Z | null | In this PR, there are two changes.
1. The progress bar can now be shown, by providing the length of the iterator.
2. `tqdm_kwargs` can be passed through, enabling finer control over the tqdm library. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3748/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3748/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3748.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3748",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3748.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3748"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4693/comments | https://api.github.com/repos/huggingface/datasets/issues/4693/events | https://github.com/huggingface/datasets/pull/4693 | 1,306,788,322 | PR_kwDODunzps47go-F | 4,693 | update `samsum` script | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 2 | 2022-07-16T11:53:05Z | 2022-09-23T11:40:11Z | 2022-09-23T11:37:57Z | null | update `samsum` script after #4672 was merged (citation is also updated) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4693/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4693/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4693.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4693",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4693.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4693"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"We are closing PRs to dataset scripts because we are moving them to the Hub.\r\n\r\nThanks anyway.\r\n\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/960/comments | https://api.github.com/repos/huggingface/datasets/issues/960/events | https://github.com/huggingface/datasets/pull/960 | 754,422,710 | MDExOlB1bGxSZXF1ZXN0NTMwMzI1MzUx | 960 | Add code to automate parts of the dataset card | [] | closed | false | null | 0 | 2020-12-01T14:04:51Z | 2021-04-26T07:56:01Z | 2021-04-26T07:56:01Z | null | Most parts of the "Dataset Structure" section can be generated automatically. This PR adds some code to do so. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/960/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/960/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/960.diff",
"html_url": "https://github.com/huggingface/datasets/pull/960",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/960.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/960"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6058 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6058/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6058/comments | https://api.github.com/repos/huggingface/datasets/issues/6058/events | https://github.com/huggingface/datasets/issues/6058 | 1,815,131,397 | I_kwDODunzps5sMLUF | 6,058 | laion-coco download error | [] | closed | false | null | 1 | 2023-07-21T04:24:15Z | 2023-07-22T01:42:06Z | 2023-07-22T01:42:06Z | null | ### Describe the bug
The full trace:
```
/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py:1744: FutureWarning: 'ignore_verifications' was deprecated in favor of 'verification_mode' in version 2.9.1 and will be removed in 3.0.0.
You can remove this warning by passing 'verification_mode=no_checks' instead.
warnings.warn(
Downloading and preparing dataset parquet/laion--laion-coco to /home/bian/.cache/huggingface/datasets/laion___parquet/laion--laion-coco-cb4205d7f1863066/0.0.0/bcacc8bdaa0614a5d73d0344c813275e590940c6ea8bc569da462847103a1afd...
Downloading data: 100%|█| 1.89G/1.89G [04:57<00:00,
Downloading data files: 100%|█| 1/1 [04:59<00:00, 2
Extracting data files: 100%|█| 1/1 [00:00<00:00, 13
Generating train split: 0 examples [00:00, ? examples/s]<_io.BufferedReader name='/home/bian/.cache/huggingface/datasets/downloads/26d7a016d25bbd9443115cfa3092136e8eb2f1f5bcd41540cb9234572927f04c'>
Traceback (most recent call last):
File "/home/bian/data/ZOC/download_laion_coco.py", line 4, in <module>
dataset = load_dataset("laion/laion-coco", ignore_verifications=True)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/load.py", line 1791, in load_dataset
builder_instance.download_and_prepare(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 891, in download_and_prepare
self._download_and_prepare(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 986, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1748, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/builder.py", line 1842, in _prepare_split_single
generator = self._generate_tables(**gen_kwargs)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/datasets/packaged_modules/parquet/parquet.py", line 67, in _generate_tables
parquet_file = pq.ParquetFile(f)
File "/home/bian/anaconda3/envs/sd/lib/python3.10/site-packages/pyarrow/parquet/core.py", line 323, in __init__
self.reader.open(
File "pyarrow/_parquet.pyx", line 1227, in pyarrow._parquet.ParquetReader.open
File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.
```
I have carefully followed the instructions in #5264 but still get the same error.
Other helpful information:
```
ds = load_dataset("parquet", data_files="https://huggingface.co/datasets/laion/laion-coco/resolve/d22869de3ccd39dfec1507f7ded32e4a518dad24/part-00000-2256f782-126f-4dc6-b9c6-e6757637749d-c000.snappy.parquet")
Found cached dataset parquet (/home/bian/.cache/huggingface/datasets/parquet/default-a02eea00aeb08b0e/0.0.0/bb8ccf89d9ee38581ff5e51506d721a9b37f14df8090dc9b2d8fb4a40957833f)
100%|██████████████| 1/1 [00:00<00:00, 4.55it/s]
```
### Steps to reproduce the bug
```
from datasets import load_dataset
dataset = load_dataset("laion/laion-coco", ignore_verifications=True/False)
```
### Expected behavior
Properly load Laion-coco dataset
### Environment info
datasets==2.11.0 torch==1.12.1 python 3.10 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6058/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6058/timeline | null | completed | null | null | false | [
"This can also mean one of the files was not downloaded correctly.\r\n\r\nWe log an erroneous file's name before raising the reader's error, so this is how you can find the problematic file. Then, you should delete it and call `load_dataset` again.\r\n\r\n(I checked all the uploaded files, and they seem to be valid... |
https://api.github.com/repos/huggingface/datasets/issues/839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/839/comments | https://api.github.com/repos/huggingface/datasets/issues/839/events | https://github.com/huggingface/datasets/issues/839 | 740,355,270 | MDU6SXNzdWU3NDAzNTUyNzA= | 839 | XSum dataset missing spaces between sentences | [] | open | false | null | 0 | 2020-11-11T00:34:43Z | 2020-11-11T00:34:43Z | null | null | I noticed that the XSum dataset has no space between sentences. This could lead to worse results for anyone training or testing on it. Here's an example (0th entry in the test set):
`The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like this morning 'Oh I think you're nominated'", said Dappy."And I was like 'Oh yeah, which one?' And now we've got nominated for four awards. I mean, wow!"Bandmate Fazer added: "We thought it's best of us to come down and mingle with everyone and say hello to the cameras. And now we find we've got four nominations."The band have two shots at the best song prize, getting the nod for their Tynchy Stryder collaboration Number One, and single Strong Again.Their album Uncle B will also go up against records by the likes of Beyonce and Kanye West.N-Dubz picked up the best newcomer Mobo in 2007, but female member Tulisa said they wouldn't be too disappointed if they didn't win this time around."At the end of the day we're grateful to be where we are in our careers."If it don't happen then it don't happen - live to fight another day and keep on making albums and hits for the fans."Dappy also revealed they could be performing live several times on the night.The group will be doing Number One and also a possible rendition of the War Child single, I Got Soul.The charity song is a re-working of The Killers' All These Things That I've Done and is set to feature artists like Chipmunk, Ironik and Pixie Lott.This year's Mobos will be held outside of London for the first time, in Glasgow on 30 September.N-Dubz said they were looking forward to performing for their Scottish fans and boasted about their recent shows north of the border."We just done Edinburgh the other day," said Dappy."We smashed up an N-Dubz show over there. We done Aberdeen about three or four months ago - we smashed up that show over there! Everywhere we go we smash it up!"` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/839/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/839/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/443 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/443/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/443/comments | https://api.github.com/repos/huggingface/datasets/issues/443/events | https://github.com/huggingface/datasets/issues/443 | 666,246,716 | MDU6SXNzdWU2NjYyNDY3MTY= | 443 | Cannot unpickle saved .pt dataset with torch.save()/load() | [] | closed | false | null | 1 | 2020-07-27T12:13:37Z | 2020-07-27T13:05:11Z | 2020-07-27T13:05:11Z | null | Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling:
```python
>>> import torch
>>> import nlp
>>> squad = nlp.load_dataset("squad.py", split="train")
>>> squad
Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype='string', id=None)}, num_rows: 87599)
>>> squad = squad.map(create_features, batched=True)
>>> squad.set_format(type="torch", columns=["source_ids", "target_ids", "attention_mask"])
>>> torch.save(squad, "squad.pt")
>>> squad_pt = torch.load("squad.pt")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 593, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/torch/serialization.py", line 773, in _legacy_load
result = unpickler.load()
File "/home/vegarab/.conda/envs/torch/lib/python3.7/site-packages/nlp/splits.py", line 493, in __setitem__
raise ValueError("Cannot add elem. Use .add() instead.")
ValueError: Cannot add elem. Use .add() instead.
```
where `create_features` is a function that tokenizes the data using `batch_encode_plus` and returns a Dict with `input_ids`, `target_ids` and `attention_mask`.
```python
def create_features(batch):
source_text_encoding = tokenizer.batch_encode_plus(
batch["source_text"],
max_length=max_source_length,
pad_to_max_length=True,
truncation=True)
target_text_encoding = tokenizer.batch_encode_plus(
batch["target_text"],
max_length=max_target_length,
pad_to_max_length=True,
truncation=True)
features = {
"source_ids": source_text_encoding["input_ids"],
"target_ids": target_text_encoding["input_ids"],
"attention_mask": source_text_encoding["attention_mask"]
}
return features
```
I found a similar issue in [issue 5267 in the huggingface/transformers repo](https://github.com/huggingface/transformers/issues/5267) which was solved by downgrading to `nlp==0.2.0`. That did not solve this problem, however. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/443/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/443/timeline | null | completed | null | null | false | [
"This seems to be fixed in a non-released version. \r\n\r\nInstalling nlp from source\r\n```\r\ngit clone https://github.com/huggingface/nlp\r\ncd nlp\r\npip install .\r\n```\r\nsolves the issue. "
] |
https://api.github.com/repos/huggingface/datasets/issues/2054 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2054/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2054/comments | https://api.github.com/repos/huggingface/datasets/issues/2054/events | https://github.com/huggingface/datasets/issues/2054 | 831,597,665 | MDU6SXNzdWU4MzE1OTc2NjU= | 2,054 | Could not find file for ZEST dataset | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 4 | 2021-03-15T09:11:58Z | 2021-05-03T09:30:24Z | 2021-05-03T09:30:24Z | null | I am trying to use the ZEST dataset from Allen AI using the code below in Colab:
```
!pip install -q datasets
from datasets import load_dataset
dataset = load_dataset("zest")
```
I am getting the following error,
```
Using custom data configuration default
Downloading and preparing dataset zest/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to /root/.cache/huggingface/datasets/zest/default/0.0.0/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca...
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
<ipython-input-6-18dbbc1a4b8a> in <module>()
1 from datasets import load_dataset
2
----> 3 dataset = load_dataset("zest")
9 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
612 )
613 elif response is not None and response.status_code == 404:
--> 614 raise FileNotFoundError("Couldn't find file at {}".format(url))
615 _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
616 raise ConnectionError("Couldn't reach {}".format(url))
FileNotFoundError: Couldn't find file at https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2054/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2054/timeline | null | completed | null | null | false | [
"The zest dataset url was changed (allenai/zest#3) and #2057 should resolve this.",
"This has been fixed in #2057 by @matt-peters (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)",
"Thanks @lhoestq and @matt-peters ",
"I am closing this issue since its ... |
https://api.github.com/repos/huggingface/datasets/issues/247 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/247/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/247/comments | https://api.github.com/repos/huggingface/datasets/issues/247/events | https://github.com/huggingface/datasets/pull/247 | 632,380,078 | MDExOlB1bGxSZXF1ZXN0NDI5MTMwMzQ2 | 247 | Make all dataset downloads deterministic by applying `sorted` to glob and os.listdir | [] | closed | false | null | 3 | 2020-06-06T11:02:10Z | 2020-06-08T09:18:16Z | 2020-06-08T09:18:14Z | null | This PR makes all datasets loading deterministic by applying `sorted()` to all `glob.glob` and `os.listdir` statements.
Are there other "non-deterministic" functions apart from `glob.glob()` and `os.listdir()` that you can think of @thomwolf @lhoestq @mariamabarham @jplu ?
**Important**
It does break backward compatibility for these datasets because
1. When loading the complete dataset the order in which the examples are saved is different now
2. When loading only part of a split, the examples themselves might be different.
@patrickvonplaten - the nlp / longformer notebook has to be updated since the examples might now be different | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/247/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/247/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/247.diff",
"html_url": "https://github.com/huggingface/datasets/pull/247",
"merged_at": "2020-06-08T09:18:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/247.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/247"
} | true | [
"That's great!\r\n\r\nI think it would be nice to test \"deterministic-ness\" of datasets in CI if we can do it (should be left for future PR of course)\r\n\r\nHere is a possibility (maybe there are other ways to do it): given that we may soon have efficient and large-scale hashing (cf our discussion on versioning/... |
https://api.github.com/repos/huggingface/datasets/issues/1942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1942/comments | https://api.github.com/repos/huggingface/datasets/issues/1942/events | https://github.com/huggingface/datasets/issues/1942 | 816,037,520 | MDU6SXNzdWU4MTYwMzc1MjA= | 1,942 | [experiment] missing default_experiment-1-0.arrow | [] | closed | false | null | 18 | 2021-02-25T03:02:15Z | 2022-10-05T13:08:45Z | 2022-10-05T13:08:45Z | null | the original report was pretty bad and incomplete - my apologies!
Please see the complete version here: https://github.com/huggingface/datasets/issues/1942#issuecomment-786336481
------------
As mentioned here https://github.com/huggingface/datasets/issues/1939 metrics don't get cached, looking at my local `~/.cache/huggingface/metrics` - there are many `*.arrow.lock` files but zero metrics files.
w/o the network I get:
```
FileNotFoundError: [Errno 2] No such file or directory: '~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow
```
there is just `~/.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow.lock`
I did run the same `run_seq2seq.py` script on the instance with network and it worked just fine, but only the lock file was left behind.
this is with master.
Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1942/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1942/timeline | null | completed | null | null | false | [
"Hi !\r\n\r\nThe cache at `~/.cache/huggingface/metrics` stores the users data for metrics computations (hence the arrow files).\r\n\r\nHowever python modules (i.e. dataset scripts, metric scripts) are stored in `~/.cache/huggingface/modules/datasets_modules`.\r\n\r\nIn particular the metrics are cached in `~/.cach... |
https://api.github.com/repos/huggingface/datasets/issues/1999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1999/comments | https://api.github.com/repos/huggingface/datasets/issues/1999/events | https://github.com/huggingface/datasets/pull/1999 | 823,753,591 | MDExOlB1bGxSZXF1ZXN0NTg2MTM5ODMy | 1,999 | Add FashionMNIST dataset | [] | closed | false | null | 1 | 2021-03-06T21:36:57Z | 2021-03-09T09:52:11Z | 2021-03-09T09:52:11Z | null | This PR adds [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1999/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1999/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1999.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1999",
"merged_at": "2021-03-09T09:52:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1999.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1999"
} | true | [
"Hi @lhoestq,\r\n\r\nI have added the changes from the review."
] |
https://api.github.com/repos/huggingface/datasets/issues/1052 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1052/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1052/comments | https://api.github.com/repos/huggingface/datasets/issues/1052/events | https://github.com/huggingface/datasets/pull/1052 | 756,171,798 | MDExOlB1bGxSZXF1ZXN0NTMxNzU5MjA0 | 1,052 | add sharc dataset | [] | closed | false | null | 0 | 2020-12-03T12:57:23Z | 2020-12-03T16:44:21Z | 2020-12-03T14:09:54Z | null | This PR adds the ShARC dataset.
More info:
https://sharc-data.github.io/index.html | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1052/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1052/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1052.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1052",
"merged_at": "2020-12-03T14:09:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1052.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1052"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1133/comments | https://api.github.com/repos/huggingface/datasets/issues/1133/events | https://github.com/huggingface/datasets/pull/1133 | 757,307,660 | MDExOlB1bGxSZXF1ZXN0NTMyNzA1ODQ4 | 1,133 | Adding XQUAD-R Dataset | [] | closed | false | null | 0 | 2020-12-04T18:22:29Z | 2020-12-04T18:28:54Z | 2020-12-04T18:28:49Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1133/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1133/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1133.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1133",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1133.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1133"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/3538 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3538/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3538/comments | https://api.github.com/repos/huggingface/datasets/issues/3538/events | https://github.com/huggingface/datasets/pull/3538 | 1,094,756,755 | PR_kwDODunzps4wlLmD | 3,538 | Readme usage update | [] | closed | false | null | 0 | 2022-01-05T21:26:28Z | 2022-01-05T23:34:25Z | 2022-01-05T23:24:15Z | null | I notice that the recent commit throws a lot of errors in the automatic checks. It looks to me like those errors were already there (metadata issues) and are unrelated to what I've just changed, but they are worth another look to make sure. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3538/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3538/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3538.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3538",
"merged_at": "2022-01-05T23:24:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3538.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3538"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2194/comments | https://api.github.com/repos/huggingface/datasets/issues/2194/events | https://github.com/huggingface/datasets/issues/2194 | 853,909,452 | MDU6SXNzdWU4NTM5MDk0NTI= | 2,194 | py3.7: TypeError: can't pickle _LazyModule objects | [] | closed | false | null | 1 | 2021-04-08T21:02:48Z | 2021-04-09T16:56:50Z | 2021-04-09T01:52:57Z | null | While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[testing]
export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \
examples/language-modeling/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \
--per_device_train_batch_size $BS --output_dir /tmp/test-clm --block_size 128 --logging_steps 1 \
--fp16
```
```
Traceback (most recent call last):
File "examples/language-modeling/run_clm.py", line 453, in <module>
main()
File "examples/language-modeling/run_clm.py", line 336, in main
load_from_cache_file=not data_args.overwrite_cache,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in map
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
for k, dataset in self.items()
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1259, in map
update_data=update_data,
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 157, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 158, in wrapper
self._fingerprint, transform, kwargs_for_fingerprint
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint
hasher.update(transform_args[key])
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 57, in update
self.m.update(self.hash(value).encode("utf-8"))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 53, in hash
return cls.hash_default(value)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 46, in hash_default
return cls.hash_bytes(dumps(value))
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 389, in dumps
dump(obj, file)
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 361, in dump
Pickler(file, recurse=True).dump(obj)
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
StockPickler.dump(self, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 437, in dump
self.save(obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 556, in save_function
obj=obj,
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce
save(args)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple
save(element)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
f(self, obj) # Call unbound method with explicit self
File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
StockPickler.save_dict(pickler, obj)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict
self._batch_setitems(obj.items())
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems
save(v)
File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 524, in save
rv = reduce(self.proto)
TypeError: can't pickle _LazyModule objects
```
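For background, the step that fails in the traceback above is `datasets`' cache fingerprinting, which boils down to hashing the pickled transform and its arguments; anything unpicklable reachable from the function (here a `transformers` `_LazyModule`) aborts it. A pure-Python sketch of the idea — illustrative only, not the actual `datasets` implementation:

```python
import hashlib
import pickle

def update_fingerprint(previous_fingerprint, transform_args):
    # `datasets` caches the result of `map` under a fingerprint obtained
    # by hashing the previous fingerprint plus the pickled transform
    # arguments. If anything in there cannot be pickled (e.g. a lazily
    # imported module object), this step raises a TypeError like the one
    # at the end of the traceback.
    hasher = hashlib.md5(previous_fingerprint.encode("utf-8"))
    for key in sorted(transform_args):
        hasher.update(pickle.dumps(transform_args[key]))
    return hasher.hexdigest()

fp = update_fingerprint("abc123", {"batched": True, "num_proc": 1})
print(fp)  # a stable 32-character hex digest
```

If any argument cannot be pickled, the `pickle.dumps` call fails before a fingerprint is produced, which is the same failure mode shown above with `dill`.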
```
$ python --version
Python 3.7.4
$ python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.8.0.dev20210110+cu110
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
```
Thanks. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2194/timeline | null | completed | null | null | false | [
"\r\nThis wasn't a `datasets` problem, but `transformers`' and it was solved here https://github.com/huggingface/transformers/pull/11168\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/1072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1072/comments | https://api.github.com/repos/huggingface/datasets/issues/1072/events | https://github.com/huggingface/datasets/pull/1072 | 756,454,511 | MDExOlB1bGxSZXF1ZXN0NTMxOTk2Njky | 1,072 | actually uses the previously declared VERSION on the configs in the template | [] | closed | false | null | 0 | 2020-12-03T18:44:27Z | 2020-12-03T19:35:46Z | 2020-12-03T19:35:46Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1072/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1072/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1072.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1072",
"merged_at": "2020-12-03T19:35:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1072.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1072"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/406/comments | https://api.github.com/repos/huggingface/datasets/issues/406/events | https://github.com/huggingface/datasets/issues/406 | 658,581,764 | MDU6SXNzdWU2NTg1ODE3NjQ= | 406 | Faster Shuffling? | [] | closed | false | null | 4 | 2020-07-16T21:21:53Z | 2020-09-07T14:45:26Z | 2020-09-07T14:45:25Z | null | Consider shuffling bookcorpus:
```
dataset = nlp.load_dataset('bookcorpus', split='train')
dataset.shuffle()
```
According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`.
But I can also just write the lines to a text file:
```
batch_size = 100000
with open('tmp.txt', 'w+') as out_f:
for i in tqdm(range(0, len(dataset), batch_size)):
batch = dataset[i:i+batch_size]['text']
print("\n".join(batch), file=out_f)
```
Which completes in a couple minutes, followed by `shuf tmp.txt > tmp2.txt` which completes in under a minute. And finally,
```
dataset = nlp.load_dataset('text', data_files='tmp2.txt')
```
Which completes in under 10 minutes. I read up on Apache Arrow this morning, and it seems like the columnar data format is not especially well-suited to shuffling rows, since moving items around requires a lot of book-keeping.
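For reference, the index-permutation strategy mentioned here (what `shuffle` via `select` amounts to) can be sketched in plain Python — the function name is illustrative, and this is not the actual `datasets` internals:

```python
import random

def shuffle_by_indices(rows, seed=42):
    # Shuffle without moving the underlying data up front: build a random
    # permutation of row indices, then gather rows in that order. The
    # permutation itself is cheap; the cost is in the gather step, where
    # contiguous reads become random access over the storage.
    rng = random.Random(seed)
    indices = list(range(len(rows)))
    rng.shuffle(indices)
    return [rows[i] for i in indices]

rows = [f"line {i}" for i in range(10)]
shuffled = shuffle_by_indices(rows)
print(shuffled)  # same rows, new order
```

For a Python list the gather is trivial, but for columnar on-disk storage each gathered row touches every column, which is one way to see why the row-based `shuf` pipeline above can win.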
Is shuffle inherently slow, or am I just using it wrong? And if it is slow, would it make sense to try converting the data to a row-based format on disk and then shuffling? (Instead of calling select with a random permutation, as is currently done.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/406/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/406/timeline | null | completed | null | null | false | [
"I think the slowness here probably come from the fact that we are copying from and to python.\r\n\r\n@lhoestq for all the `select`-based methods I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. What do you think?",
"> @lhoestq for all the `select... |
https://api.github.com/repos/huggingface/datasets/issues/4539 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4539/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4539/comments | https://api.github.com/repos/huggingface/datasets/issues/4539/events | https://github.com/huggingface/datasets/pull/4539 | 1,279,779,829 | PR_kwDODunzps46GfWv | 4,539 | Replace deprecated logging.warn with logging.warning | [] | closed | false | null | 0 | 2022-06-22T08:32:29Z | 2022-06-22T13:43:23Z | 2022-06-22T12:51:51Z | null | Replace `logging.warn` (deprecated in [Python 2.7, 2011](https://github.com/python/cpython/commit/04d5bc00a219860c69ea17eaa633d3ab9917409f)) with `logging.warning` (added in [Python 2.3, 2003](https://github.com/python/cpython/commit/6fa635df7aa88ae9fd8b41ae42743341316c90f7)).
* https://docs.python.org/3/library/logging.html#logging.Logger.warning
* https://github.com/python/cpython/issues/57444
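A quick self-contained check that the rename is behavior-preserving (logger and handler names here are arbitrary):

```python
import logging

# `logging.warn` has long been a deprecated alias of `logging.warning`;
# the rename in this PR is a drop-in replacement.
records = []
handler = logging.Handler()
handler.emit = records.append  # capture records instead of printing

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
logger.propagate = False

logger.warning("use warning(), not warn()")
print(records[0].getMessage())  # use warning(), not warn()
```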
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4539/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4539/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4539.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4539",
"merged_at": "2022-06-22T12:51:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4539.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4539"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5706/comments | https://api.github.com/repos/huggingface/datasets/issues/5706/events | https://github.com/huggingface/datasets/issues/5706 | 1,653,545,835 | I_kwDODunzps5ijxtr | 5,706 | Support categorical data types for Parquet | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 4 | 2023-04-04T09:45:35Z | 2023-05-12T19:21:43Z | null | null | ### Feature request
Huggingface datasets does not seem to support categorical / dictionary data types for Parquet as of now. There seems to be a `TODO` in the code for this feature but no implementation yet. Below you can find sample code to reproduce the error that is currently thrown when attempting to read a Parquet file with categorical columns:
```python
import pandas as pd
import pyarrow.parquet as pq
from datasets import load_dataset
# Create categorical sample DataFrame
df = pd.DataFrame({'type': ['foo', 'bar']}).astype('category')
df.to_parquet('data.parquet')
# Read back as pyarrow table
table = pq.read_table('data.parquet')
print(table.schema)
# type: dictionary<values=string, indices=int32, ordered=0>
# Load with huggingface datasets
load_dataset('parquet', data_files='data.parquet')
```
Error:
```
Traceback (most recent call last):
File ".venv/lib/python3.10/site-packages/datasets/builder.py", line 1875, in _prepare_split_single
writer.write_table(table)
File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 566, in write_table
self._build_writer(inferred_schema=pa_table.schema)
File ".venv/lib/python3.10/site-packages/datasets/arrow_writer.py", line 379, in _build_writer
inferred_features = Features.from_arrow_schema(inferred_schema)
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in from_arrow_schema
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1622, in <dictcomp>
obj = {field.name: generate_from_arrow_type(field.type) for field in pa_schema}
File ".venv/lib/python3.10/site-packages/datasets/features/features.py", line 1361, in generate_from_arrow_type
raise NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table
NotImplementedError
```
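For readers unfamiliar with the `dictionary<values=string, indices=int32>` type printed above, here is a pure-Python sketch of what dictionary (categorical) encoding stores — an illustration of the storage model, not the actual Arrow implementation:

```python
def dictionary_encode(values):
    # A dictionary-encoded column stores one list of distinct categories
    # plus a small integer code per row, instead of repeating each value.
    categories, index_of, codes = [], {}, []
    for value in values:
        if value not in index_of:
            index_of[value] = len(categories)
            categories.append(value)
        codes.append(index_of[value])
    return codes, categories

def dictionary_decode(codes, categories):
    # Materialize plain values again from codes + categories.
    return [categories[code] for code in codes]

codes, categories = dictionary_encode(["foo", "bar", "foo"])
print(codes, categories)  # [0, 1, 0] ['foo', 'bar']
```

Until this is supported natively, one possible workaround (my suggestion, not part of the original report) is to decode categorical columns back to plain values before writing the Parquet file, e.g. by dropping the `.astype('category')` or casting back with `.astype(str)`.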
### Motivation
Categorical data types, as offered by Pandas and implemented with the `DictionaryType` dtype in `pyarrow` can significantly reduce dataset size and are a handy way to turn textual features into numerical representations and back. Lack of support in Huggingface datasets greatly reduces compatibility with a common Pandas / Parquet feature.
### Your contribution
I could provide a PR. However, it would be nice to have an initial complexity estimate from one of the core developers first. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5706/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5706/timeline | null | null | null | null | false | [
"Hi ! We could definitely a type that holds the categories and uses a DictionaryType storage. There's a ClassLabel type that is similar with a 'names' parameter (similar to a id2label in deep learning frameworks) that uses an integer array as storage.\r\n\r\nIt can be added in `features.py`. Here are some pointers:... |
https://api.github.com/repos/huggingface/datasets/issues/2676 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2676/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2676/comments | https://api.github.com/repos/huggingface/datasets/issues/2676/events | https://github.com/huggingface/datasets/pull/2676 | 947,734,909 | MDExOlB1bGxSZXF1ZXN0NjkyNjc2NTg5 | 2,676 | Increase json reader block_size automatically | [] | closed | false | null | 0 | 2021-07-19T14:51:14Z | 2021-07-19T17:51:39Z | 2021-07-19T17:51:38Z | null | Currently some files can't be read with the default parameters of the JSON lines reader.
For example this one:
https://huggingface.co/datasets/thomwolf/codeparrot/resolve/main/file-000000000006.json.gz
raises a pyarrow error:
```python
ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```
The block size used is pyarrow's default (related to this [jira issue](https://issues.apache.org/jira/browse/ARROW-9612)).
To fix this issue I changed the block_size to increase automatically if there is a straddling issue when parsing a batch of json lines.
By default the value is `chunksize // 32` in order to leverage multithreading, and it doubles every time a straddling issue occurs. The block_size is then reset for each file.
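The retry strategy described above can be sketched as follows — a simplified stand-in with a fake reader, not the actual implementation in `datasets` (which catches pyarrow's `ArrowInvalid`):

```python
def read_with_adaptive_block_size(read_batch, chunksize):
    # Start small so several blocks can be parsed in parallel, then double
    # the block size whenever a JSON object straddles a block boundary.
    # `read_batch` is a placeholder for the pyarrow JSON reader.
    block_size = max(chunksize // 32, 1)
    while True:
        try:
            return read_batch(block_size)
        except ValueError as err:  # stand-in for pyarrow.lib.ArrowInvalid
            if "straddling" not in str(err):
                raise
            block_size *= 2  # retry with a larger block size

def fake_read(block_size):
    # Pretend any block smaller than 1 MB hits the straddling error.
    if block_size < 1_000_000:
        raise ValueError("straddling object straddles two block boundaries")
    return "parsed table"

print(read_with_adaptive_block_size(fake_read, 1_000_000))  # parsed table
```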
cc @thomwolf @albertvillanova | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2676/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2676/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2676.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2676",
"merged_at": "2021-07-19T17:51:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2676.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2676"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1575 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1575/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1575/comments | https://api.github.com/repos/huggingface/datasets/issues/1575/events | https://github.com/huggingface/datasets/pull/1575 | 767,076,374 | MDExOlB1bGxSZXF1ZXN0NTM5OTEzNzgx | 1,575 | Hind_Encorp all done | [] | closed | false | null | 11 | 2020-12-15T01:36:02Z | 2020-12-16T15:15:17Z | 2020-12-16T15:15:17Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1575/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1575/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1575.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1575",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1575.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1575"
} | true | [
"ALL TEST PASSED locally @yjernite ",
"@rahul-art kindly run the following from the datasets folder \r\n\r\n```\r\nmake style \r\nflake8 datasets\r\n\r\n```\r\n",
"@skyprince999 I did that before it says all done \r\n",
"I did that again it gives the same output all done and then I synchronized my changes ... | |
https://api.github.com/repos/huggingface/datasets/issues/4331 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4331/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4331/comments | https://api.github.com/repos/huggingface/datasets/issues/4331/events | https://github.com/huggingface/datasets/pull/4331 | 1,234,016,110 | PR_kwDODunzps43uN2R | 4,331 | Adding eval metadata to Amazon Polarity | [] | closed | false | null | 0 | 2022-05-12T13:47:59Z | 2022-05-12T21:03:14Z | 2022-05-12T21:03:13Z | null | Adding eval metadata to Amazon Polarity | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4331/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4331/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4331.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4331",
"merged_at": "2022-05-12T21:03:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4331.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4331"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/853/comments | https://api.github.com/repos/huggingface/datasets/issues/853/events | https://github.com/huggingface/datasets/issues/853 | 743,426,583 | MDU6SXNzdWU3NDM0MjY1ODM= | 853 | concatenate_datasets support axis=0 or 1 ? | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "008672",
"default": true... | closed | false | null | 10 | 2020-11-16T02:46:23Z | 2021-04-19T16:07:18Z | 2021-04-19T16:07:18Z | null | I want to achieve the following result

| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/853/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/853/timeline | null | completed | null | null | false | [
"Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is concatenate the columns.\r\nCurrently to add more columns to a dataset, one must use `map`.\r\nWhat you can do is somehting like this:\r\n```python\r\n# suppose you have datasets d1, d2, d3\r\ndef add_colum... |
https://api.github.com/repos/huggingface/datasets/issues/3329 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3329/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3329/comments | https://api.github.com/repos/huggingface/datasets/issues/3329/events | https://github.com/huggingface/datasets/issues/3329 | 1,065,096,971 | I_kwDODunzps4_fBcL | 3,329 | Map function: Type error on iter #999 | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2021-11-27T17:53:05Z | 2021-11-29T20:40:15Z | 2021-11-29T20:40:15Z | null | ## Describe the bug
Using the map function, it throws a type error on iter #999
Here is the code I am calling:
```
dataset = datasets.load_dataset('squad')
dataset['validation'].map(text_numbers_to_int, input_columns=['context'], fn_kwargs={'column': 'context'})
```
text_numbers_to_int returns the input text with numbers replaced in the format {'context': text}
It happens at:
`File "C:\Users\lonek\anaconda3\envs\ai\Lib\site-packages\datasets\arrow_writer.py", line 289, in <listcomp>`
`[row[0][col] for row in self.current_examples], type=col_type, try_type=col_try_type, col=col`
The issue is that the list comprehension expects self.current_examples to be type tuple(dict, str), but for some reason 26 out of 1000 of the self.current_examples are type tuple(str, str)
Here is an example of what self.current_examples should be
({'context': 'Super Bowl 50 was an...merals 50.'}, '')
Here is an example of what self.current_examples are when it throws the error:
('The Panthers used th... Marriott.', '')
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3329/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3329/timeline | null | completed | null | null | false | [
"Hi, thanks for reporting.\r\n\r\nIt would be really helpful if you could provide the actual code of the `text_numbers_to_int` function so we can reproduce the error.",
"```\r\ndef text_numbers_to_int(text, column=\"\"):\r\n \"\"\"\r\n Convert text numbers to int.\r\n\r\n :param text: text numbers\r\n ... |
https://api.github.com/repos/huggingface/datasets/issues/5980 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5980/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5980/comments | https://api.github.com/repos/huggingface/datasets/issues/5980/events | https://github.com/huggingface/datasets/issues/5980 | 1,770,255,973 | I_kwDODunzps5pg_Zl | 5,980 | Viewing dataset card returns “502 Bad Gateway” | [] | closed | false | null | 3 | 2023-06-22T19:14:48Z | 2023-06-27T08:38:19Z | 2023-06-26T14:42:45Z | null | The url is: https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams
I am able to successfully view the “Files and versions” tab: [Confirm-Labs/pile_ngrams_trigrams at main](https://huggingface.co/datasets/Confirm-Labs/pile_ngrams_trigrams/tree/main)
Any help would be appreciated! Thanks! I hope this is the right place to report an issue like this.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5980/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5980/timeline | null | completed | null | null | false | [
"Can you try again? Maybe there was a minor outage.",
"Yes, it seems to be working now. In case it's helpful, the outage lasted several days. It was failing as late as yesterday morning. ",
"we fixed something on the server side, glad it's fixed now"
] |
https://api.github.com/repos/huggingface/datasets/issues/5200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5200/comments | https://api.github.com/repos/huggingface/datasets/issues/5200/events | https://github.com/huggingface/datasets/issues/5200 | 1,435,831,559 | I_kwDODunzps5VlQ0H | 5,200 | Some links to canonical datasets in the docs are outdated | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 1 | 2022-11-04T10:06:21Z | 2022-11-07T18:40:20Z | 2022-11-07T18:40:20Z | null | As we don't have canonical datasets in the github repo anymore, some old links to them doesn't work. I don't know how many of them are there, I found link to SuperGlue here: https://huggingface.co/docs/datasets/dataset_script#multiple-configurations, probably there are more of them. These links should be replaced by links to the corresponding datasets on the Hub. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5200/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5200/timeline | null | completed | null | null | false | [
"Thanks for catching this, I can go through the docs and replace the links to their corresponding datasets on the Hub!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3357/comments | https://api.github.com/repos/huggingface/datasets/issues/3357/events | https://github.com/huggingface/datasets/pull/3357 | 1,068,607,382 | PR_kwDODunzps4vQmcL | 3,357 | Update languages in aeslc dataset card | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 0 | 2021-12-01T16:20:46Z | 2022-09-23T13:16:49Z | 2022-09-23T13:16:49Z | null | After having worked a bit with the dataset.
As far as I know, it is solely in English (en-US). There are only a few mails in Spanish, French or German (less than a dozen I would estimate). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3357/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3357/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3357.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3357",
"merged_at": "2022-09-23T13:16:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3357.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3357"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/749 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/749/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/749/comments | https://api.github.com/repos/huggingface/datasets/issues/749/events | https://github.com/huggingface/datasets/issues/749 | 726,366,062 | MDU6SXNzdWU3MjYzNjYwNjI= | 749 | [XGLUE] Adding new dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 15 | 2020-10-21T10:51:36Z | 2022-09-30T11:35:30Z | 2021-01-06T10:02:55Z | null | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/749/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/749/timeline | null | completed | null | null | false | [
"Amazing! ",
"Small poll @thomwolf @yjernite @lhoestq @JetRunner @qiweizhen .\r\n\r\nAs stated in the XGLUE paper: https://arxiv.org/pdf/2004.01401.pdf , for each of the 11 down-stream tasks training data is only available in English, whereas development and test data is available in multiple different language ... |
https://api.github.com/repos/huggingface/datasets/issues/1498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1498/comments | https://api.github.com/repos/huggingface/datasets/issues/1498/events | https://github.com/huggingface/datasets/pull/1498 | 763,303,606 | MDExOlB1bGxSZXF1ZXN0NTM3Nzc2MjM5 | 1,498 | add stereoset | [] | closed | false | null | 0 | 2020-12-12T05:04:37Z | 2020-12-18T10:03:53Z | 2020-12-18T10:03:53Z | null | StereoSet is a dataset that measures stereotype bias in language models. StereoSet consists of 17,000 sentences that measures model preferences across gender, race, religion, and profession. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1498/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1498/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1498.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1498",
"merged_at": "2020-12-18T10:03:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1498.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1498"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1558 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1558/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1558/comments | https://api.github.com/repos/huggingface/datasets/issues/1558/events | https://github.com/huggingface/datasets/pull/1558 | 765,707,907 | MDExOlB1bGxSZXF1ZXN0NTM5MDQ2MzA4 | 1,558 | Adding Igbo NER data | [] | closed | false | null | 3 | 2020-12-13T23:52:11Z | 2020-12-21T14:38:20Z | 2020-12-21T14:38:20Z | null | This PR adds the Igbo NER dataset.
Data: https://github.com/IgnatiusEzeani/IGBONLP/tree/master/ig_ner | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1558/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1558/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1558.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1558",
"merged_at": "2020-12-21T14:38:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1558.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1558"
} | true | [
"Thanks for the PR @purvimisal. \r\n\r\nFew comments below. ",
"Hi, @lhoestq Thank you for the review. I have made all the changes. PTAL! ",
"the CI error is not related to your dataset, merging"
] |
https://api.github.com/repos/huggingface/datasets/issues/1470 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1470/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1470/comments | https://api.github.com/repos/huggingface/datasets/issues/1470/events | https://github.com/huggingface/datasets/pull/1470 | 761,791,065 | MDExOlB1bGxSZXF1ZXN0NTM2NDA2MjQx | 1,470 | Add wiki lingua dataset | [] | closed | false | null | 7 | 2020-12-11T02:04:18Z | 2020-12-16T15:27:13Z | 2020-12-16T15:27:13Z | null | Hello @lhoestq ,
I am opening a fresh pull request as advised in my original PR https://github.com/huggingface/datasets/pull/1308
Thanks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1470/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1470/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1470.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1470",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1470.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1470"
} | true | [
"it’s failing because of `RemoteDatasetTest.test_load_dataset_orange_sum`\r\nwhich i think is not the dataset you are doing a PR for. Try rebasing with:\r\n```\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit push -u -f origin your_branch\r\n```",
"> it’s failing because of `RemoteDatasetTest.test_load_... |
https://api.github.com/repos/huggingface/datasets/issues/4513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4513/comments | https://api.github.com/repos/huggingface/datasets/issues/4513/events | https://github.com/huggingface/datasets/pull/4513 | 1,273,450,338 | PR_kwDODunzps45xTqv | 4,513 | Update Google Cloud Storage documentation and add Azure Blob Storage example | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 5 | 2022-06-16T11:46:09Z | 2022-06-23T17:05:11Z | 2022-06-23T16:54:59Z | null | While I was going through the 🤗 Datasets documentation of the Cloud storage filesystems at https://huggingface.co/docs/datasets/filesystems, I realized that the Google Cloud Storage documentation could be improved: e.g. a bullet point says "Load your dataset" when the actual call is "Save your dataset", an in-line code comment mentions an "s3 bucket" instead of a "gcs bucket", and some more in-line comments could be included.
Also, I think that mixing Google Cloud Storage documentation with AWS S3's one was a little bit confusing, so I moved all those to the end of the document under an h2 tab named "Other filesystems", with an h3 for "Google Cloud Storage".
Besides that, I was currently working with Azure Blob Storage and found out that the URL to [adlfs](https://github.com/fsspec/adlfs) was common for both filesystems Azure Blob Storage and Azure DataLake Storage, as well as the URL, which was updated even though the redirect was working fine, so I decided to group those under the same row in the column of supported filesystems.
And took also the change to add a small documentation entry as for Google Cloud Storage but for Azure Blob Storage, as I assume that AWS S3, GCP Cloud Storage, and Azure Blob Storage, are the most used cloud storage providers.
Let me know if you're OK with these changes, or whether you want me to roll back some of those! :hugs: | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4513/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4513/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4513.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4513",
"merged_at": "2022-06-23T16:54:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4513.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4513"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @stevhliu, I've kept the `>>>` before all the in-line code comments as it was done like that in the default S3 example that was already there, I assume that it's done like that just for readiness, let me know whether we should rem... |
https://api.github.com/repos/huggingface/datasets/issues/2412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2412/comments | https://api.github.com/repos/huggingface/datasets/issues/2412/events | https://github.com/huggingface/datasets/issues/2412 | 903,769,151 | MDU6SXNzdWU5MDM3NjkxNTE= | 2,412 | Docstring mistake: dataset vs. metric | [] | closed | false | null | 1 | 2021-05-27T13:39:11Z | 2021-06-01T08:18:04Z | 2021-06-01T08:18:04Z | null | This:
https://github.com/huggingface/datasets/blob/d95b95f8cf3cb0cff5f77a675139b584dcfcf719/src/datasets/load.py#L582
Should better be something like:
`a metric identifier on HuggingFace AWS bucket (list all available metrics and ids with ``datasets.list_metrics()``)`
I can provide a PR l8er... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2412/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2412/timeline | null | completed | null | null | false | [
"> I can provide a PR l8er...\r\n\r\nSee #2425 "
] |
https://api.github.com/repos/huggingface/datasets/issues/5472 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5472/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5472/comments | https://api.github.com/repos/huggingface/datasets/issues/5472/events | https://github.com/huggingface/datasets/pull/5472 | 1,558,662,251 | PR_kwDODunzps5Inlp8 | 5,472 | Release: 2.9.0 | [] | closed | false | null | 4 | 2023-01-26T19:29:42Z | 2023-01-26T19:40:44Z | 2023-01-26T19:33:00Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5472/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5472/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5472.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5472",
"merged_at": "2023-01-26T19:33:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5472.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5472"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/4194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4194/comments | https://api.github.com/repos/huggingface/datasets/issues/4194/events | https://github.com/huggingface/datasets/pull/4194 | 1,210,958,602 | PR_kwDODunzps42jjD3 | 4,194 | Support lists of multi-dimensional numpy arrays | [] | closed | false | null | 1 | 2022-04-21T12:22:26Z | 2022-05-12T15:16:34Z | 2022-05-12T15:08:40Z | null | Fix #4191.
CC: @SaulLu | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4194/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4194/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4194.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4194",
"merged_at": "2022-05-12T15:08:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4194.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4194"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2459 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2459/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2459/comments | https://api.github.com/repos/huggingface/datasets/issues/2459/events | https://github.com/huggingface/datasets/issues/2459 | 915,222,015 | MDU6SXNzdWU5MTUyMjIwMTU= | 2,459 | `Proto_qa` hosting seems to be broken | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-06-08T16:16:32Z | 2021-06-10T08:31:09Z | 2021-06-10T08:31:09Z | null | ## Describe the bug
The hosting (on Github) of the `proto_qa` dataset seems broken. I haven't investigated more yet, just flagging it for now.
@zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py`
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("proto_qa")
```
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset
use_auth_token=use_auth_token,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 630, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/hf/.cache/huggingface/modules/datasets_modules/datasets/proto_qa/445346efaad5c5f200ecda4aa7f0fb50ff1b55edde3003be424a2112c3e8102e/proto_qa.py", line 131, in _split_generators
train_fpath = dl_manager.download(_URLs[self.config.name]["train"])
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 199, in download
num_proc=download_config.num_proc,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 195, in map_nested
return function(data_struct)
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 218, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/iesl/protoqa-data/master/data/train/protoqa_train.jsonl
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2459/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2459/timeline | null | completed | null | null | false | [
"@VictorSanh , I think @mariosasko is already working on it. "
] |
https://api.github.com/repos/huggingface/datasets/issues/1157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1157/comments | https://api.github.com/repos/huggingface/datasets/issues/1157/events | https://github.com/huggingface/datasets/pull/1157 | 757,657,888 | MDExOlB1bGxSZXF1ZXN0NTMzMDAwNDQy | 1,157 | Add dataset XhosaNavy English -Xhosa | [] | closed | false | null | 0 | 2020-12-05T11:19:54Z | 2020-12-07T09:11:33Z | 2020-12-07T09:11:33Z | null | Add dataset XhosaNavy English -Xhosa
More info : http://opus.nlpl.eu/XhosaNavy.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1157/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1157/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1157.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1157",
"merged_at": "2020-12-07T09:11:33Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1157.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1157"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6020/comments | https://api.github.com/repos/huggingface/datasets/issues/6020/events | https://github.com/huggingface/datasets/issues/6020 | 1,799,720,536 | I_kwDODunzps5rRY5Y | 6,020 | Inconsistent "The features can't be aligned" error when combining map, multiprocessing, and variable length outputs | [] | open | false | null | 1 | 2023-07-11T20:40:38Z | 2023-07-12T15:58:24Z | null | null | ### Describe the bug
I'm using a dataset with `map` and multiprocessing to run a function that returns a variable-length list of outputs. This output list may be empty. Normally this is handled fine, but there is an edge case that crops up when using multiprocessing. In some cases, an empty list result ends up in a dataset shard consisting of a single item. This results in a `The features can't be aligned` error that is difficult to debug because it depends on the number of processes/shards used.
I've reproduced a minimal example below. My current workaround is to fill empty results with a dummy value that I filter out afterwards, but this was a weird error that took a while to track down.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_list([{'idx':i} for i in range(60)])
def test_func(row, idx):
if idx==58:
return {'output': []}
else:
return {'output' : [{'test':1}, {'test':2}]}
# this works fine
test1 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=4)
# this fails
test2 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=True, num_proc=32)
>ValueError: The features can't be aligned because the key output of features {'idx': Value(dtype='int64', id=None), 'output': Sequence(feature=Value(dtype='null', id=None), length=-1, id=None)} has unexpected type - Sequence(feature=Value(dtype='null', id=None), length=-1, id=None) (expected either [{'test': Value(dtype='int64', id=None)}] or Value("null").
```
The error occurs during the check
```python
_check_if_features_can_be_aligned([dset.features for dset in dsets])
```
When the multiprocessing splitting lines up just right with the empty return value, one of the `dset` in `dsets` will have a single item with an empty list value, causing the error.
### Expected behavior
Expected behavior is the result would be the same regardless of the `num_proc` value used.
### Environment info
Datasets version 2.11.0
Python 3.9.16 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6020/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6020/timeline | null | null | null | null | false | [
"This scenario currently requires explicitly passing the target features (to avoid the error): \r\n```python\r\nimport datasets\r\n\r\n...\r\n\r\nfeatures = dataset.features\r\nfeatures[\"output\"] = = [{\"test\": datasets.Value(\"int64\")}]\r\ntest2 = dataset.map(lambda row, idx: test_func(row, idx), with_indices=... |
https://api.github.com/repos/huggingface/datasets/issues/3426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3426/comments | https://api.github.com/repos/huggingface/datasets/issues/3426/events | https://github.com/huggingface/datasets/pull/3426 | 1,078,670,031 | PR_kwDODunzps4vxEN5 | 3,426 | Update disaster_response_messages download urls (+ add validation split) | [] | closed | false | null | 0 | 2021-12-13T15:30:12Z | 2021-12-14T14:38:30Z | 2021-12-14T14:38:29Z | null | Fixes #3240, fixes #3416 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3426/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3426/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3426.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3426",
"merged_at": "2021-12-14T14:38:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3426.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3426"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5967 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5967/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5967/comments | https://api.github.com/repos/huggingface/datasets/issues/5967/events | https://github.com/huggingface/datasets/issues/5967 | 1,763,926,520 | I_kwDODunzps5pI2H4 | 5,967 | Config name / split name lost after map with multiproc | [] | open | false | null | 2 | 2023-06-19T17:27:36Z | 2023-06-28T08:55:25Z | null | null | ### Describe the bug
Calling the `.map` method on a dataset loses its config name / split name, but only when run with multiprocessing.
### Steps to reproduce the bug
```python
from datasets import Audio, load_dataset
from transformers import AutoFeatureExtractor
import numpy as np
# load dummy dataset
libri = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean")
# make train / test splits
libri = libri["validation"].train_test_split(seed=42, shuffle=True, test_size=0.1)
# example feature extractor
model_id = "ntu-spml/distilhubert"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id, do_normalize=True, return_attention_mask=True)
sampling_rate = feature_extractor.sampling_rate
libri = libri.cast_column("audio", Audio(sampling_rate=sampling_rate))
max_duration = 30.0
def preprocess_function(examples):
audio_arrays = [x["array"] for x in examples["audio"]]
inputs = feature_extractor(
audio_arrays,
sampling_rate=feature_extractor.sampling_rate,
max_length=int(feature_extractor.sampling_rate * max_duration),
truncation=True,
return_attention_mask=True,
)
return inputs
# single proc map
libri_encoded = libri.map(
preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=1
)
print(10 * "=" ,"Single processing", 10 * "=")
print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split)
print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split)
# multi proc map
libri_encoded = libri.map(
preprocess_function, remove_columns=["audio", "file"], batched=True, num_proc=2
)
print(10 * "=" ,"Multi processing", 10 * "=")
print("Config name before: ", libri["train"].config_name, " Split name before: ", libri["train"].split)
print("Config name after: ", libri_encoded["train"].config_name, " Split name after: ", libri_encoded["train"].split)
```
**Print Output:**
```
========== Single processing ==========
Config name before: clean Split name before: validation
Config name after: clean Split name after: validation
========== Multi processing ==========
Config name before: clean Split name before: validation
Config name after: None Split name after: None
```
=> we can see that the config/split names are lost in the multiprocessing setting
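A possible fix, sketched under the assumption that the merge happens attribute-by-attribute when the per-process shards are concatenated (the `merge_attr` helper is illustrative, not the library's actual API):

```python
# Sketch: when concatenating per-process shards, keep config_name / split
# only if all shards share the same value; otherwise drop it to None.
def merge_attr(values):
    distinct = set(values)
    return distinct.pop() if len(distinct) == 1 else None
```

Applied to `config_name` and `split` across shards, this would preserve `"clean"` / `"validation"` in the multiprocessing case, since every shard carries the same values.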
### Expected behavior
Should retain both config / split names in the multiproc setting
### Environment info
- `datasets` version: 2.13.1.dev0
- Platform: Linux-5.15.0-67-generic-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5967/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5967/timeline | null | null | null | null | false | [
"This must be due to DatasetInfo.from_merge which drops them and is used in `concatenate_datasets`.\r\n\r\nAnd you're experiencing this issue because multiprocessing does concatenate the resulting datasets from each process.\r\n\r\nMaybe they should be kept if all the subdatasets share the same values for config_na... |
https://api.github.com/repos/huggingface/datasets/issues/566 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/566/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/566/comments | https://api.github.com/repos/huggingface/datasets/issues/566/events | https://github.com/huggingface/datasets/pull/566 | 691,160,208 | MDExOlB1bGxSZXF1ZXN0NDc3OTM2NTIz | 566 | Remove logger pickling to fix gg colab issues | [] | closed | false | null | 0 | 2020-09-02T16:16:21Z | 2020-09-03T16:31:53Z | 2020-09-03T16:31:52Z | null | `logger` objects are not picklable in google colab, contrary to `logger` objects in jupyter notebooks or in python shells.
It creates some issues in google colab right now.
Indeed, calling any `Dataset` method triggers a fingerprint update that pickles the transform function, and as the logger comes with it, this results in an error (full stacktrace [here](http://pastebin.fr/64330)):
```python
/usr/local/lib/python3.6/dist-packages/zmq/backend/cython/socket.cpython-36m-x86_64-linux-gnu.so in zmq.backend.cython.socket.Socket.__reduce_cython__()
TypeError: no default __reduce__ due to non-trivial __cinit__
```
To fix that I no longer dump the transform (`_map_single`, `select`, etc.), but the full name only (`nlp.arrow_dataset.Dataset._map_single`, `nlp.arrow_dataset.Dataset.select`, etc.) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/566/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/566/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/566.diff",
"html_url": "https://github.com/huggingface/datasets/pull/566",
"merged_at": "2020-09-03T16:31:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/566.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/566"
} | true | [] |
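The fix described in the pull request body above — pickling a function's fully qualified name instead of the function object — can be sketched like this (the `full_name` helper is illustrative, not the actual implementation):

```python
# Pickle-friendly reference: a dotted-name string instead of the function
# object, which may drag along unpicklable state such as logger handles.
def full_name(func):
    return f"{func.__module__}.{func.__qualname__}"

class Dataset:
    def select(self):
        pass
```

A plain string like `"nlp.arrow_dataset.Dataset.select"` pickles trivially in any environment, which is the point of the change.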
https://api.github.com/repos/huggingface/datasets/issues/3191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3191/comments | https://api.github.com/repos/huggingface/datasets/issues/3191/events | https://github.com/huggingface/datasets/issues/3191 | 1,041,225,111 | I_kwDODunzps4-D9WX | 3,191 | Dataset viewer issue for '*compguesswhat*' | [
{
"color": "fef2c0",
"default": false,
"description": "",
"id": 3287858981,
"name": "streaming",
"node_id": "MDU6TGFiZWwzMjg3ODU4OTgx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/streaming"
}
] | closed | false | null | 4 | 2021-11-01T14:16:49Z | 2022-09-12T08:02:29Z | 2022-09-12T08:02:29Z | null | ## Dataset viewer issue for '*compguesswhat*'
**Link:** https://huggingface.co/datasets/compguesswhat
File not found
Am I the one who added this dataset ? No
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3191/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3191/timeline | null | completed | null | null | false | [
"```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('compguesswhat', name='compguesswhat-original',split='train', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.ve... |
https://api.github.com/repos/huggingface/datasets/issues/5201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5201/comments | https://api.github.com/repos/huggingface/datasets/issues/5201/events | https://github.com/huggingface/datasets/pull/5201 | 1,435,881,554 | PR_kwDODunzps5CM0zn | 5,201 | Do not sort splits in dataset info | [] | closed | false | null | 5 | 2022-11-04T10:47:21Z | 2022-11-04T14:47:37Z | 2022-11-04T14:45:09Z | null | I suggest not to sort splits by their names in dataset_info in README so that they are displayed in the order specified in the loading script. Otherwise `test` split is displayed first, see this repo: https://huggingface.co/datasets/paws
What do you think?
But I added sorting in tests to fix CI (for the same dataset). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5201/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5201/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5201.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5201",
"merged_at": "2022-11-04T14:45:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5201.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5201"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"It would be coherent with https://github.com/huggingface/datasets-server/issues/614#issuecomment-1290534153",
"I think we started working on this issue nearly at the same time... :sweat_smile: \r\n- CI was fixed with this: https://... |
https://api.github.com/repos/huggingface/datasets/issues/1113 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1113/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1113/comments | https://api.github.com/repos/huggingface/datasets/issues/1113/events | https://github.com/huggingface/datasets/pull/1113 | 757,115,557 | MDExOlB1bGxSZXF1ZXN0NTMyNTQ1Mzg2 | 1,113 | add qed | [] | closed | false | null | 0 | 2020-12-04T13:47:57Z | 2020-12-05T15:46:21Z | 2020-12-05T15:41:57Z | null | adding QED: Dataset for Explanations in Question Answering
https://github.com/google-research-datasets/QED
https://arxiv.org/abs/2009.06354 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1113/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1113/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1113.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1113",
"merged_at": "2020-12-05T15:41:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1113.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1113"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2775 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2775/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2775/comments | https://api.github.com/repos/huggingface/datasets/issues/2775/events | https://github.com/huggingface/datasets/issues/2775 | 964,303,626 | MDU6SXNzdWU5NjQzMDM2MjY= | 2,775 | `generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 3 | 2021-08-09T19:28:51Z | 2021-08-26T08:30:54Z | null | null | ## Describe the bug
**Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_seed()` has been called, and I think that using `set_seed()` is a standard procedure to aid reproducibility. I've added more details to reproduce this below.
Hi there! I'm using my own local dataset and custom preprocessing function. My preprocessing function seems to be unpickle-able, perhaps because it is from a closure (will debug this separately). I get this warning, which is expected:
https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L260-L265
However, what's not expected is that the `datasets` actually _does_ seem to cache and reuse this dataset between runs! After that line, the next thing that's logged looks like:
```text
Loading cached processed dataset at /home/xxx/.cache/huggingface/datasets/csv/default-xxx/0.0.0/xxx/cache-xxx.arrow
```
The path is exactly the same each run (e.g., last 26 runs).
This becomes a problem because I'll pass in the `--max_eval_samples` flag to the HuggingFace example script I'm running off of ([run_swag.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/multiple-choice/run_swag.py)). The fact that the cached dataset is reused means this flag gets ignored. I'll try to load 100 examples, and it will load the full cached 1,000,000.
I think that
https://github.com/huggingface/datasets/blob/450b9174765374111e5c6daab0ed294bc3d9b639/src/datasets/fingerprint.py#L248
... is actually consistent because randomness is being controlled in HuggingFace/Transformers for reproducibility. I've added a demo of this below.
## Steps to reproduce the bug
```python
# Contents of print_fingerprint.py
from transformers import set_seed
from datasets.fingerprint import generate_random_fingerprint
set_seed(42)
print(generate_random_fingerprint())
```
```bash
for i in {0..10}; do
python print_fingerprint.py
done
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
1c80317fa3b1799d
```
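The effect can be reproduced without any libraries: reseeding the global RNG makes any "random" hex fingerprint repeat. The helper below only mimics the real `generate_random_fingerprint`; it is not its implementation:

```python
import random

def fake_fingerprint(nbits=64):
    # mimic drawing a random hex fingerprint from the global RNG
    return f"{random.getrandbits(nbits):0{nbits // 4}x}"

random.seed(42)
first = fake_fingerprint()
random.seed(42)
second = fake_fingerprint()
assert first == second  # same seed -> same "random" fingerprint
```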
## Expected results
After the "random hash" warning is emitted, a random hash is generated, and no outdated cached datasets are reused.
## Actual results
After the "random hash" warning is emitted, an identical hash is generated each time, and an outdated cached dataset is reused each run.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-5.8.0-1038-gcp-x86_64-with-glibc2.31
- Python version: 3.9.6
- PyArrow version: 4.0.1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2775/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2775/timeline | null | null | null | null | false | [
"I dug into what I believe is the root of this issue and added a repro in my comment. If this is better addressed as a cross-team issue, let me know and I can open an issue in the Transformers repo",
"Hi !\r\n\r\nIMO we shouldn't try to modify `set_seed` from transformers but maybe make `datasets` have its own RN... |
https://api.github.com/repos/huggingface/datasets/issues/5035 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5035/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5035/comments | https://api.github.com/repos/huggingface/datasets/issues/5035/events | https://github.com/huggingface/datasets/pull/5035 | 1,388,914,476 | PR_kwDODunzps4_wVie | 5,035 | Fix typos in load docstrings and comments | [] | closed | false | null | 1 | 2022-09-28T08:05:07Z | 2022-09-28T17:28:40Z | 2022-09-28T17:26:15Z | null | Minor fix of typos in load docstrings and comments | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5035/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5035/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5035.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5035",
"merged_at": "2022-09-28T17:26:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5035.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5035"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2416 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2416/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2416/comments | https://api.github.com/repos/huggingface/datasets/issues/2416/events | https://github.com/huggingface/datasets/pull/2416 | 903,932,299 | MDExOlB1bGxSZXF1ZXN0NjU1MTM3NDUy | 2,416 | Add KLUE dataset | [] | closed | false | null | 7 | 2021-05-27T15:49:51Z | 2021-06-09T15:00:02Z | 2021-06-04T17:45:15Z | null | Add `KLUE (Korean Language Understanding Evaluation)` dataset released recently from [paper](https://arxiv.org/abs/2105.09680), [github](https://github.com/KLUE-benchmark/KLUE) and [webpage](https://klue-benchmark.com/tasks).
Please let me know if there's anything missing in the code or README.
Thanks!
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2416/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2416/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2416.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2416",
"merged_at": "2021-06-04T17:45:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2416.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2416"
} | true | [
"I'm not sure why I got error like below when I auto-generate dummy data \"mrc\" \r\n```\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 0\r\nKeys should be unique and deterministic in nature\r\n```",
"> I'm not sure why I got error like below when I auto-generate du... |
https://api.github.com/repos/huggingface/datasets/issues/1915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1915/comments | https://api.github.com/repos/huggingface/datasets/issues/1915/events | https://github.com/huggingface/datasets/issues/1915 | 812,229,654 | MDU6SXNzdWU4MTIyMjk2NTQ= | 1,915 | Unable to download `wiki_dpr` | [] | closed | false | null | 3 | 2021-02-19T18:11:32Z | 2021-03-03T17:40:48Z | 2021-03-03T17:40:48Z | null | I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran:
`curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")`
However, I got the following error:
`datasets.utils.info_utils.UnexpectedDownloadedFile: {'embeddings_index'}`
I tried adding in flags `with_embeddings=False` and `with_index=False`:
`curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False, embeddings_name="multiset", index_name="no_index")`
But I got the following error:
`raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_5’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_15’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_30’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_36’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_18’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_41’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_13’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_48’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_10’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_23’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_14’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_34’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_43’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_40’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_47’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_3’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_24’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_7’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_33’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_46’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_42’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_27’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_29’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_26’, 
‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_22’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_4’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_20’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_39’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_6’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_16’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_8’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_35’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_49’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_17’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_25’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_0’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_38’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_12’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_44’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_1’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_32’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_19’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_31’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_37’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_9’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_11’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_21’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_28’, ‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_45’, 
‘https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_2’}`
Is there anything else I need to set to download the dataset?
**UPDATE**: just running `curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False)` gives me the same error.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1915/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1915/timeline | null | completed | null | null | false | [
"Thanks for reporting ! This is a bug. For now feel free to set `ignore_verifications=False` in `load_dataset`.\r\nI'm working on a fix",
"I just merged a fix :)\r\n\r\nWe'll do a patch release soon. In the meantime feel free to try it from the master branch\r\nThanks again for reporting !",
"Closing since this... |
https://api.github.com/repos/huggingface/datasets/issues/3375 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3375/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3375/comments | https://api.github.com/repos/huggingface/datasets/issues/3375/events | https://github.com/huggingface/datasets/pull/3375 | 1,070,454,913 | PR_kwDODunzps4vWrXz | 3,375 | Support streaming zipped dataset repo by passing only repo name | [] | closed | false | null | 6 | 2021-12-03T10:43:05Z | 2021-12-16T18:03:32Z | 2021-12-16T18:03:31Z | null | Proposed solution:
- I have added the method `iter_files` to DownloadManager and StreamingDownloadManager
- I use this in modules: "csv", "json", "text"
- I test for CSV/JSONL/TXT zipped (and non-zipped) files, both in streaming and non-streaming modes
Fix #3373. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3375/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3375/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3375.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3375",
"merged_at": "2021-12-16T18:03:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3375.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3375"
} | true | [
"I just tested and I think this only opens one file ? If there are several files in the ZIP, only the first one is opened. To open several files from a ZIP, one has to call `open` several times.\r\n\r\nWhat about updating the CSV loader to make it `download_and_extract` zip files, and open each extracted file ?",
... |
https://api.github.com/repos/huggingface/datasets/issues/3143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3143/comments | https://api.github.com/repos/huggingface/datasets/issues/3143/events | https://github.com/huggingface/datasets/issues/3143 | 1,033,569,655 | I_kwDODunzps49mwV3 | 3,143 | Provide a way to check if the features (in info) match with the data of a split | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "E5583E",
"default": fals... | open | false | null | 1 | 2021-10-22T13:13:36Z | 2021-10-22T13:17:56Z | null | null | **Is your feature request related to a problem? Please describe.**
I understand that currently the data loaded has not always the type described in the info features
**Describe the solution you'd like**
Provide a way to check if the rows have the type described by info features
**Describe alternatives you've considered**
Always check it, and raise an error when loading the data if their type doesn't match the features.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3143/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3143/timeline | null | null | null | null | false | [
"Related: #3144 "
] |
https://api.github.com/repos/huggingface/datasets/issues/1164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1164/comments | https://api.github.com/repos/huggingface/datasets/issues/1164/events | https://github.com/huggingface/datasets/pull/1164 | 757,716,575 | MDExOlB1bGxSZXF1ZXN0NTMzMDQyMjA1 | 1,164 | Add DaNe dataset | [] | closed | false | null | 1 | 2020-12-05T16:36:50Z | 2020-12-08T12:50:18Z | 2020-12-08T12:49:55Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1164/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1164/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1164.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1164",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1164.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1164"
} | true | [
"Thanks, this looks great!\r\n\r\nFor the code quality test, it looks like `flake8` is throwing the error, so you can tun `flake8 datasets` locally and fix the errors it points out until it passes"
] |