| Column | Type | Lengths / values |
|---|---|---|
| url | stringlengths | 58-61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72-75 |
| comments_url | stringlengths | 67-70 |
| events_url | stringlengths | 65-68 |
| html_url | stringlengths | 46-51 |
| id | int64 | 600M-2.05B |
| node_id | stringlengths | 18-32 |
| number | int64 | 2-6.51k |
| title | stringlengths | 1-290 |
| user | dict | |
| labels | listlengths | 0-4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0-4 |
| milestone | dict | |
| comments | listlengths | 0-30 |
| created_at | timestamp[ns, tz=UTC] | |
| updated_at | timestamp[ns, tz=UTC] | |
| closed_at | timestamp[ns, tz=UTC] | |
| author_association | stringclasses | 3 values |
| active_lock_reason | float64 | |
| draft | float64 | 0-1 |
| pull_request | dict | |
| body | stringlengths | 0-228k |
| reactions | dict | |
| timeline_url | stringlengths | 67-70 |
| performed_via_github_app | float64 | |
| state_reason | stringclasses | 3 values |
| is_pull_request | bool | 2 classes |
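The table above summarizes the column schema of this issues dump. As a purely illustrative sketch (the field subset and the `labels` sub-schema are assumptions, not the dataset's actual feature definition), a schema like this could be declared with `datasets.Features`:

```python
from datasets import Features, Sequence, Value

# Hypothetical declaration of a subset of the columns listed above.
features = Features(
    {
        "url": Value("string"),
        "id": Value("int64"),
        "number": Value("int64"),
        "title": Value("string"),
        # `labels` is a list of label dicts; only two sub-fields shown here.
        "labels": Sequence({"name": Value("string"), "color": Value("string")}),
        "state": Value("string"),
        "locked": Value("bool"),
        "created_at": Value("timestamp[ns, tz=UTC]"),
        "body": Value("string"),
        "is_pull_request": Value("bool"),
    }
)
print(features["created_at"])  # -> Value(dtype='timestamp[ns, tz=UTC]', ...)
```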
https://api.github.com/repos/huggingface/datasets/issues/6439
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6439/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6439/comments
https://api.github.com/repos/huggingface/datasets/issues/6439/events
https://github.com/huggingface/datasets/issues/6439
2,002,916,514
I_kwDODunzps53YhSi
6,439
Download + preparation speed of datasets.load_dataset is 20x slower than huggingface hub snapshot and manual loading
{ "avatar_url": "https://avatars.githubusercontent.com/u/10792502?v=4", "events_url": "https://api.github.com/users/AntreasAntoniou/events{/privacy}", "followers_url": "https://api.github.com/users/AntreasAntoniou/followers", "following_url": "https://api.github.com/users/AntreasAntoniou/following{/other_user}", "gists_url": "https://api.github.com/users/AntreasAntoniou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AntreasAntoniou", "id": 10792502, "login": "AntreasAntoniou", "node_id": "MDQ6VXNlcjEwNzkyNTAy", "organizations_url": "https://api.github.com/users/AntreasAntoniou/orgs", "received_events_url": "https://api.github.com/users/AntreasAntoniou/received_events", "repos_url": "https://api.github.com/users/AntreasAntoniou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AntreasAntoniou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AntreasAntoniou/subscriptions", "type": "User", "url": "https://api.github.com/users/AntreasAntoniou" }
[]
open
false
null
[]
null
[]
2023-11-20T20:07:23Z
2023-11-20T20:07:37Z
null
NONE
null
null
null
### Describe the bug

I am working with a dataset I am trying to publish. The path is Antreas/TALI. It's a fairly large dataset that contains images, video, audio and text. I have been having multiple problems when the dataset is downloaded using the `load_dataset` function -- even with 64 workers, it takes more than 7 days to process. With snapshot download it takes 12 hours, and that includes the dataset preparation done using `load_dataset` and passing the dataset parquet file paths. Find the script I am using below:

```python
import multiprocessing as mp
import pathlib
from typing import Optional

import datasets
from rich import print
from tqdm import tqdm


def download_dataset_via_hub(
    dataset_name: str,
    dataset_download_path: pathlib.Path,
    num_download_workers: int = mp.cpu_count(),
):
    import huggingface_hub as hf_hub

    download_folder = hf_hub.snapshot_download(
        repo_id=dataset_name,
        repo_type="dataset",
        cache_dir=dataset_download_path,
        resume_download=True,
        max_workers=num_download_workers,
        ignore_patterns=[],
    )
    return pathlib.Path(download_folder) / "data"


def load_dataset_via_hub(
    dataset_download_path: pathlib.Path,
    num_download_workers: int = mp.cpu_count(),
    dataset_name: Optional[str] = None,
):
    from dataclasses import dataclass, field

    from datasets import ClassLabel, Features, Image, Sequence, Value

    dataset_path = download_dataset_via_hub(
        dataset_download_path=dataset_download_path,
        num_download_workers=num_download_workers,
        dataset_name=dataset_name,
    )

    # Building a list of file paths for each split
    train_files = [
        file.as_posix()
        for file in pathlib.Path(dataset_path).glob("*.parquet")
        if "train" in file.as_posix()
    ]
    val_files = [
        file.as_posix()
        for file in pathlib.Path(dataset_path).glob("*.parquet")
        if "val" in file.as_posix()
    ]
    test_files = [
        file.as_posix()
        for file in pathlib.Path(dataset_path).glob("*.parquet")
        if "test" in file.as_posix()
    ]
    print(
        f"Found {len(test_files)} files for testing set, "
        f"{len(train_files)} for training set and {len(val_files)} for validation set"
    )
    data_files = {
        "test": test_files,
        "val": val_files,
        "train": train_files,
    }

    features = Features(
        {
            "image": Image(decode=True),  # Set `decode=True` if you want to decode the images, otherwise `decode=False`
            "image_url": Value("string"),
            "item_idx": Value("int64"),
            "wit_features": Sequence(
                {
                    "attribution_passes_lang_id": Value("bool"),
                    "caption_alt_text_description": Value("string"),
                    "caption_reference_description": Value("string"),
                    "caption_title_and_reference_description": Value("string"),
                    "context_page_description": Value("string"),
                    "context_section_description": Value("string"),
                    "hierarchical_section_title": Value("string"),
                    "is_main_image": Value("bool"),
                    "language": Value("string"),
                    "page_changed_recently": Value("bool"),
                    "page_title": Value("string"),
                    "page_url": Value("string"),
                    "section_title": Value("string"),
                }
            ),
            "wit_idx": Value("int64"),
            "youtube_title_text": Value("string"),
            "youtube_description_text": Value("string"),
            "youtube_video_content": Value("binary"),
            "youtube_video_starting_time": Value("string"),
            "youtube_subtitle_text": Value("string"),
            "youtube_video_size": Value("int64"),
            "youtube_video_file_path": Value("string"),
        }
    )

    dataset = datasets.load_dataset(
        "parquet" if dataset_name is None else dataset_name,
        data_files=data_files,
        features=features,
        num_proc=1,
        cache_dir=dataset_download_path / "cache",
    )
    return dataset


if __name__ == "__main__":
    dataset_cache = pathlib.Path("/disk/scratch_fast0/tali/")
    dataset = load_dataset_via_hub(dataset_cache, dataset_name="Antreas/TALI")["test"]

    for sample in tqdm(dataset):
        print(list(sample.keys()))
```

Also, streaming this dataset has been a very painfully slow process. Streaming the train set takes 15 minutes to start, and streaming the test and val sets takes 3 hours to start!

### Steps to reproduce the bug

1. Run the code I provided to get a sense of how fast snapshot + manual loading is.
2. Run `datasets.load_dataset("Antreas/TALI")` to get a sense of the speed of that op.
3. You should now have an appreciation of how long these things take.

### Expected behavior

The `load_dataset` function should be at least as fast as the huggingface snapshot download function in terms of downloading dataset files, not 20 times slower.

### Environment info

- `datasets` version: 2.14.5
- Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.35
- Python version: 3.10.13
- Huggingface_hub version: 0.17.3
- PyArrow version: 13.0.0
- Pandas version: 2.1.1
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6439/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6439/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/4456
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4456/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4456/comments
https://api.github.com/repos/huggingface/datasets/issues/4456/events
https://github.com/huggingface/datasets/issues/4456
1,263,241,449
I_kwDODunzps5LS4jp
4,456
Workflow for Tabular data
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
open
false
null
[]
null
[ "I use below to load a dataset:\r\n```\r\ndataset = datasets.load_dataset(\"scikit-learn/auto-mpg\")\r\ndf = pd.DataFrame(dataset[\"train\"])\r\n```\r\nTBH as said, tabular folk split their own dataset, they sometimes have two splits, sometimes three. Maybe somehow avoiding it for tabular datasets might be good for later. (it's just UX improvement) ", "is very slow batch access of a dataset (tabular, csv) with many columns to be expected?", "Define \"many\" ? x)", "~20k! I was surprised batch loading with as few as 32 samples was really slow. I was speculating the columnar format was the cause -- or do you see good performance with this approx size of tabular data?", "20k can be a lot for a columnar format but maybe we can optimize a few things.\r\n\r\nIt would be cool to profile the code to see if there's an unoptimized part of the code that slows everything down.\r\n\r\n(it's also possible to kill the job when it accesses the batch, it often gives you the traceback at the location where the code was running)", "FWIW I've worked with tabular data with 540k columns.", "thats awesome, whats your secret? would love to see an example!", "@wconnell I'm not sure what you mean by my secret, I load them into a numpy array 😁 \r\n\r\nAn example dataset is [here](https://portal.gdc.cancer.gov/repository?facetTab=files&filters=%7B%22content%22%3A%5B%7B%22content%22%3A%7B%22field%22%3A%22cases.project.project_id%22%2C%22value%22%3A%5B%22TCGA-CESC%22%5D%7D%2C%22op%22%3A%22in%22%7D%2C%7B%22content%22%3A%7B%22field%22%3A%22files.data_category%22%2C%22value%22%3A%5B%22DNA%20Methylation%22%5D%7D%2C%22op%22%3A%22in%22%7D%5D%2C%22op%22%3A%22and%22%7D&searchTableTab=files) which is a dataset of DNA methylation reads. This dataset is about 950 rows and 450k columns. " ]
2022-06-07T12:48:22Z
2023-03-06T08:53:55Z
null
MEMBER
null
null
null
Tabular data are treated very differently than data for NLP, audio, vision, etc., and the workflow for tabular data in `datasets` is therefore not ideal.

For tabular data it is common to process the data with pandas/spark/dask, load it into X and y (X is an array of features and y an array of labels), call train_test_split, and finally feed the data to a machine learning model. In `datasets` the workflow is different: we use load_dataset, then map, then train_test_split (if we only have a train split), and we end up with columnar dataset splits, not formatted as X and y.

Right now it is already possible to convert a dataset from and to pandas, but there are still many things that could improve the workflow for tabular data:
- be able to load the data into X and y
- be able to load a dataset from the output of spark or dask (as far as I know it's usually csv or parquet files on S3/GCS/HDFS etc.)
- support "unsplit" datasets explicitly, instead of putting everything in "train" by default

cc @adrinjalali @merveenoyan feel free to complete/correct this :) Feel free to also share ideas of APIs that would be super intuitive in your opinion!
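As a concrete illustration of the current round-trip, here is a minimal sketch using the `scikit-learn/auto-mpg` dataset mentioned in the comments above (treating `mpg` as the label column is an assumption):

```python
from datasets import load_dataset
from sklearn.model_selection import train_test_split

# Today's tabular workflow: load with `datasets`, hop to pandas,
# then assemble X and y by hand before splitting.
ds = load_dataset("scikit-learn/auto-mpg", split="train")
df = ds.to_pandas()

X = df.drop(columns=["mpg"])  # feature matrix ("mpg" assumed to be the target)
y = df["mpg"]                 # label vector
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```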
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 1, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/4456/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4456/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/3441
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3441/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3441/comments
https://api.github.com/repos/huggingface/datasets/issues/3441/events
https://github.com/huggingface/datasets/issues/3441
1,081,571,784
I_kwDODunzps5Ad3nI
3,441
Add QuALITY dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
open
false
null
[]
null
[ "I'll take this one if no one hasn't yet!" ]
2021-12-15T22:26:19Z
2021-12-28T15:17:05Z
null
MEMBER
null
null
null
## Adding a Dataset

- **Name:** QuALITY
- **Description:** A challenging question-answering dataset with very long contexts (Twitter [thread](https://twitter.com/sleepinyourhat/status/1471225421794529281?s=20))
- **Paper:** No arXiv link yet, but the draft is [here](https://github.com/nyu-mll/quality/blob/main/quality_preprint.pdf)
- **Data:** GitHub repo [here](https://github.com/nyu-mll/quality)
- **Motivation:** This dataset would serve as a nice way to benchmark long-range Transformer models like BigBird, Longformer and their descendants. In particular, it would be very interesting to see how the S4 model fares on this, given its impressive performance on the Long Range Arena.

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3441/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3441/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/657
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/657/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/657/comments
https://api.github.com/repos/huggingface/datasets/issues/657/events
https://github.com/huggingface/datasets/issues/657
706,204,383
MDU6SXNzdWU3MDYyMDQzODM=
657
Squad Metric Description & Feature Mismatch
{ "avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4", "events_url": "https://api.github.com/users/tshrjn/events{/privacy}", "followers_url": "https://api.github.com/users/tshrjn/followers", "following_url": "https://api.github.com/users/tshrjn/following{/other_user}", "gists_url": "https://api.github.com/users/tshrjn/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tshrjn", "id": 8372098, "login": "tshrjn", "node_id": "MDQ6VXNlcjgzNzIwOTg=", "organizations_url": "https://api.github.com/users/tshrjn/orgs", "received_events_url": "https://api.github.com/users/tshrjn/received_events", "repos_url": "https://api.github.com/users/tshrjn/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tshrjn/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tshrjn/subscriptions", "type": "User", "url": "https://api.github.com/users/tshrjn" }
[]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nThere indeed a mismatch between the features and the kwargs description\r\n\r\nI believe `answer_start` was added to match the squad dataset format for consistency, even though it is not used in the metric computation. I think I'd rather keep it this way, so that you can just give `references=squad[\"answers\"]` to `.compute()`.\r\nMaybe we can just fix the description then.", "But then providing the `answer_start` becomes mandatory since the format of the features is checked against the one provided in the squad [file](https://github.com/huggingface/datasets/pull/658/files)." ]
2020-09-22T09:07:00Z
2020-10-13T02:16:56Z
2020-09-29T15:57:38Z
NONE
null
null
null
The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` for squad, yet the `datasets.features` [require it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68), and it is not used in the evaluation either.
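A minimal sketch of the call in question (using the metric API of that era; IDs and values are illustrative): passing `answer_start` is mandatory because the reference format is validated against the declared features, even though the metric ignores it when computing the score.

```python
from datasets import load_metric

squad_metric = load_metric("squad")

predictions = [{"id": "q1", "prediction_text": "1976"}]
references = [
    {
        "id": "q1",
        # `answer_start` must be present to pass the features check,
        # yet it plays no role in the exact-match / F1 computation.
        "answers": {"text": ["1976"], "answer_start": [97]},
    }
]
print(squad_metric.compute(predictions=predictions, references=references))
# -> {'exact_match': 100.0, 'f1': 100.0}
```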
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/657/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/657/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4280
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4280/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4280/comments
https://api.github.com/repos/huggingface/datasets/issues/4280/events
https://github.com/huggingface/datasets/pull/4280
1,225,446,844
PR_kwDODunzps43S2xg
4,280
Add missing features to commonsense_qa dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "@albertvillanova it adds question_concept and id which is great. I suppose we'll talk about staying true to the format on another PR. ", "Yes, let's merge this PR as it is: it adds missing features.\r\n\r\nA subsequent PR may address the request on changing the dataset feature structure." ]
2022-05-04T14:24:26Z
2022-05-06T14:23:57Z
2022-05-06T14:16:46Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4280.diff", "html_url": "https://github.com/huggingface/datasets/pull/4280", "merged_at": "2022-05-06T14:16:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/4280.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4280" }
Partially fixes #4275.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4280/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4280/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5419
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5419/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5419/comments
https://api.github.com/repos/huggingface/datasets/issues/5419/events
https://github.com/huggingface/datasets/issues/5419
1,531,999,850
I_kwDODunzps5bUHZq
5,419
label_column='labels' in datasets.TextClassification and 'label' or 'label_ids' in transformers.DataCollator
{ "avatar_url": "https://avatars.githubusercontent.com/u/172385?v=4", "events_url": "https://api.github.com/users/CreatixEA/events{/privacy}", "followers_url": "https://api.github.com/users/CreatixEA/followers", "following_url": "https://api.github.com/users/CreatixEA/following{/other_user}", "gists_url": "https://api.github.com/users/CreatixEA/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/CreatixEA", "id": 172385, "login": "CreatixEA", "node_id": "MDQ6VXNlcjE3MjM4NQ==", "organizations_url": "https://api.github.com/users/CreatixEA/orgs", "received_events_url": "https://api.github.com/users/CreatixEA/received_events", "repos_url": "https://api.github.com/users/CreatixEA/repos", "site_admin": false, "starred_url": "https://api.github.com/users/CreatixEA/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/CreatixEA/subscriptions", "type": "User", "url": "https://api.github.com/users/CreatixEA" }
[]
closed
false
null
[]
null
[ "Hi! Thanks for pointing out this inconsistency. Changing the default value at this point is probably not worth it, considering we've started discussing the state of the task API internally - we will most likely deprecate the current one and replace it with a more robust solution that relies on the `train_eval_index` field stored in the YAML section of the dataset cards.", "The task templates API has been deprecated (will be removed in version 3.0), so I'm closing this issue." ]
2023-01-13T09:40:07Z
2023-07-21T14:27:08Z
2023-07-21T14:27:08Z
NONE
null
null
null
### Describe the bug

When preparing a dataset for a task using `datasets.TextClassification`, the output feature is named `labels`. When preparing the trainer using the `transformers.DataCollator`, the default column name is `label` if binary, or `label_ids` if it is a multi-class problem. It is therefore required to rename the column to the expected name: `label` or `label_ids`.

### Steps to reproduce the bug

```python
from datasets.tasks import TextClassification
from transformers import AutoTokenizer, DataCollatorWithPadding

# `my_dataset` and `model` are assumed to be defined beforehand
ds_prepared = my_dataset.prepare_for_task(
    TextClassification(text_column="TEXT", label_column="MY_LABEL_COLUMN_1_OR_0")
)
print(ds_prepared)

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
ds_tokenized = ds_prepared.map(lambda x: tokenizer(x["text"], truncation=True), batched=True)
print(ds_tokenized)

data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
tf_data = model.prepare_tf_dataset(
    ds_tokenized, shuffle=True, batch_size=16, collate_fn=data_collator
)
print(tf_data)
```

### Expected behavior

Without renaming the column, the target column is not in the final `tf_data`, since it is not among the column names expected by the `data_collator`. To correct this, we have to rename the column:

```python
ds_prepared = my_dataset.prepare_for_task(
    TextClassification(text_column="TEXT", label_column="MY_LABEL_COLUMN_1_OR_0")
).rename_column("labels", "label")
```

### Environment info

- `datasets` version: 2.8.0
- Platform: Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.6
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
- `transformers` version: 4.26.0.dev0
- Platform: Linux-5.15.79.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
- Python version: 3.10.6
- Huggingface_hub version: 0.11.1
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.11.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/5419/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5419/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2245
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2245/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2245/comments
https://api.github.com/repos/huggingface/datasets/issues/2245/events
https://github.com/huggingface/datasets/pull/2245
863,191,655
MDExOlB1bGxSZXF1ZXN0NjE5NjQzMjQ3
2,245
Add `key` type and duplicates verification with hashing
{ "avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4", "events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}", "followers_url": "https://api.github.com/users/NikhilBartwal/followers", "following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}", "gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NikhilBartwal", "id": 42388668, "login": "NikhilBartwal", "node_id": "MDQ6VXNlcjQyMzg4NjY4", "organizations_url": "https://api.github.com/users/NikhilBartwal/orgs", "received_events_url": "https://api.github.com/users/NikhilBartwal/received_events", "repos_url": "https://api.github.com/users/NikhilBartwal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions", "type": "User", "url": "https://api.github.com/users/NikhilBartwal" }
[]
closed
false
null
[]
null
[ "@lhoestq The tests for key type and duplicate keys have been added and verified successfully.\r\nAfter generating with an intentionally faulty `mnist` script, when there is an incompatible key type, it shows:\r\n\r\n```\r\nDownloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\\Users\\nikhil\\.cache\\huggingface\\datasets\\mnist\\mnist\\1.0.0\\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...\r\n0 examples [00:00, ? examples/s]2021-04-26 02:50:03.703836: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll\r\n\r\nFAILURE TO GENERATE DATASET: Invalid key type detected\r\nFound Key [0, 0] of type <class 'list'>\r\nKeys should be either str, int or bytes type\r\n```\r\n\r\nIn the case of duplicate keys, it now gives:\r\n\r\n```\r\nDownloading and preparing dataset mnist/mnist (download: 11.06 MiB, generated: 60.62 MiB, post-processed: Unknown size, total: 71.67 MiB) to C:\\Users\\nikhil\\.cache\\huggingface\\datasets\\mnist\\mnist\\1.0.0\\5064c25e57a1678f700d2dc798ffe8a6d519405cca7d33670fffda477857a994...\r\n0 examples [00:00, ? examples/s]2021-04-26 02:53:13.498579: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\load.py\", line 746, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 587, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 665, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\builder.py\", line 1002, in _prepare_split\r\n writer.write(example, key)\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\arrow_writer.py\", line 321, in write\r\n self.check_duplicates()\r\n File \"f:\\datasets\\datasets-1\\src\\datasets\\arrow_writer.py\", line 331, in check_duplicates\r\n raise DuplicatedKeysError(key)\r\ndatasets.keyhash.DuplicatedKeysError: FAILURE TO GENERATE DATASET !\r\nFound duplicate Key: 234467\r\nKeys should be unique and deterministic in nature\r\n```\r\nPlease let me know if this is what we wanted to implement. Thanks!", "This looks pretty cool !\r\nWe can make focus on the GeneratorBasedBuilder for now yes.\r\n\r\nDo you think we could make the ArrowWriter not look for duplicates by default ?\r\nThis way we can just enable duplicate detections when instantiating the writer in the GeneratorBasedBuilder for now.", "Thank you @lhoestq\r\n\r\n\r\n\r\n> Do you think we could make the ArrowWriter not look for duplicates by default ?\r\n\r\nWe can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`. \r\n\r\nHowever, since only `GeneratorBasedBuilder` uses the `write()` function (which includes the detection code) and the others like `ArrowBasedBuilder` use `write_table()` which remains as it was (without duplicate detection). I don't think it would be necessary.\r\n\r\nNonetheless, doing this would require just some small changes. Please let me know your thoughts on this. 
Thanks!", "I like the idea of having the duplicate detection optional for other uses of the ArrowWriter.\r\nThis class is the main tool to write python data in arrow format so I'd expect it to be flexible.\r\nThat's why I think by default it shouldn't require users to provide keys or do any duplicates detection.\r\n\r\nAn alternative would be to subclass the writer to include duplicates detection in another class.\r\n\r\nBoth options are fine for me, let me know what you think !", "> This class is the main tool to write python data in arrow format so I'd expect it to be flexible.\r\n> That's why I think by default it shouldn't require users to provide keys or do any duplicates detection.\r\n\r\nWell, that makes sense as the writer can indeed be used for other purposes as well.\r\n\r\n> We can definitely do that by including a `check_duplicates` argument while instantiating `ArrowWriter()`.\r\n\r\nI think that this would be the simplest and the more efficient option for achieving this as subclassing the writer only for this would lead to unnecessary complexity and code duplication (in case of `writer()`). \r\n\r\nI will be adding the changes soon. Thanks for the feedback @lhoestq!", "@lhoestq I have pushed the final changes just now. \r\nNow, the keys and duplicate checking will be necessary only when the `ArrowWriter` is initialized with `check_duplicates=True` specifically (in this case, for `GeneratorBasedBuilders`)\r\n\r\nLet me know if this is what was required. Thanks!", "@lhoestq Thanks for the feedback! I will be adding the tests for the same very soon. \r\n\r\nHowever, I'm not sure as to what exactly is causing the `segmentation fault` in the failing CI tests. It seems to be something from `test_concatenation_table_cast` from `test_table.py`, but I'm not sure as to what exactly. Would be great if you could help. Thanks!", "You can merge master into your branch to fix this issue.\r\nBasically pyarrow 4.0.0 has a segfault issue (which has now been resolved on the master branch of pyarrow).\r\nSo until 4.0.1 comes out we changed to using `pyarrow<4.0.0` recently.", "@lhoestq Thanks for the help with the CI failures. Apologies for the multiple merge commits. My local repo got messy while merging which led to this.\r\nWill be pushing the commit for the tests soon!", "Hey @lhoestq, I've just added the required tests for checking key duplicates and invalid key data types.\r\nI think we have caught a nice little issue as 27 datasets are currently using non-unique keys (hence, the failing tests: All these datasets are giving `DuplicateKeysError` during testing). \r\nThese datasets were not detected earlier as there was no key checking when `num_examples < writer_batch_size` due to which they passed the dummy data generation test. This bug was fixed by adding the test to `writer.finalize()` method as well for checking any leftover examples from batches. \r\n\r\nI'd like to make changes to the faulty datasets' scripts. However, I was wondering if I should do that in this PR itself or open a new PR as this might get messy in the same PR. Let me know your thoughts on this. Thanks!", "Hi ! Once https://github.com/huggingface/datasets/pull/2333 is merged, feel free to merge master into your branch to fix the CI :)", "Thanks a lot for the help @lhoestq. Besides merging the new changes, I guess this PR is completed for now :)", "I just merged the PR, feel free to merge `master` into your branch. It should fix most most of the CI issues. 
If there are some left we can fix them in this PR :)", "@lhoestq Looks like the PR is completed now. Thanks for helping me out so much in this :)", "Hey @lhoestq, I've added the test and corrected the Cl errors as well. Do let me know if this requires any change. Thanks!", "Merging. I'll update the comment on the master branch (for some reason I can edit files on this branch)", "@lhoestq Thank you for the help and feedback. Feels great to contribute!" ]
2021-04-20T20:03:19Z
2021-05-10T18:04:37Z
2021-05-10T17:31:22Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2245.diff", "html_url": "https://github.com/huggingface/datasets/pull/2245", "merged_at": "2021-05-10T17:31:21Z", "patch_url": "https://github.com/huggingface/datasets/pull/2245.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2245" }
Closes #2230

There is currently no verification of the data type or the uniqueness of the keys yielded by the `dataset_builder`. This PR is currently a work in progress with the following goals:

- [x] Add `hash_salt` to `ArrowWriter` so that keys belonging to different splits have different hashes
- [x] Add a `key` attribute to `ArrowWriter.write()` for hashing
- [x] Add a hashing class which takes an input key of a certain type (`str`/`int`/anything convertible to string) and produces a 128-bit hash using `hashlib.md5`
- [x] Create a function giving a custom error message when non-unique keys are found **[this will take care of type-checking for keys]**
- [x] Check for duplicate keys in `writer.write()` for each batch

[**NOTE**: This PR is currently concerned with `GeneratorBasedBuilder` only, for simplification. A subsequent PR will be made in the future for `ArrowBasedBuilder`.]

@lhoestq Thank you for the feedback. It would be great to have your guidance on this!
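A rough sketch of the hashing idea outlined above (names and structure are illustrative, not the PR's actual implementation): keys are normalized to bytes, salted per split, and reduced to a 128-bit md5 digest so duplicates can be detected with a set membership test.

```python
import hashlib


def hash_key(key, hash_salt: str = "") -> int:
    """Map a str/int/bytes key to a 128-bit integer using hashlib.md5."""
    if isinstance(key, int):
        key = str(key)
    if isinstance(key, str):
        key = key.encode("utf-8")
    if not isinstance(key, bytes):
        raise TypeError(f"Invalid key type {type(key)}: keys should be str, int or bytes")
    # Salting (e.g. with the split name) keeps equal keys in different splits distinct.
    md5 = hashlib.md5(hash_salt.encode("utf-8"))
    md5.update(key)
    return int(md5.hexdigest(), 16)


# Duplicate detection then reduces to a set lookup over the hashes.
seen = set()
for key in ["example-0", "example-1", "example-0"]:
    h = hash_key(key, hash_salt="train")
    if h in seen:
        print(f"Found duplicate key: {key!r}")  # analogous to a DuplicatedKeysError
    else:
        seen.add(h)
```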
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2245/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2245/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6335
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6335/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6335/comments
https://api.github.com/repos/huggingface/datasets/issues/6335/events
https://github.com/huggingface/datasets/pull/6335
1,956,740,818
PR_kwDODunzps5dggIV
6,335
Support fsspec 2023.10.0
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006013 / 0.011353 (-0.005340) | 0.003647 / 0.011008 (-0.007362) | 0.081781 / 0.038508 (0.043273) | 0.059020 / 0.023109 (0.035911) | 0.321823 / 0.275898 (0.045925) | 0.350159 / 0.323480 (0.026679) | 0.003599 / 0.007986 (-0.004386) | 0.002877 / 0.004328 (-0.001452) | 0.063941 / 0.004250 (0.059690) | 0.049460 / 0.037052 (0.012408) | 0.330185 / 0.258489 (0.071696) | 0.362220 / 0.293841 (0.068379) | 0.027613 / 0.128546 (-0.100934) | 0.007976 / 0.075646 (-0.067670) | 0.263386 / 0.419271 (-0.155885) | 0.045504 / 0.043533 (0.001971) | 0.321172 / 0.255139 (0.066033) | 0.345291 / 0.283200 (0.062091) | 0.023133 / 0.141683 (-0.118550) | 1.435816 / 1.452155 (-0.016339) | 1.557241 / 1.492716 (0.064524) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222228 / 0.018006 (0.204222) | 0.420008 / 0.000490 (0.419518) | 0.008598 / 0.000200 (0.008398) | 0.000343 / 0.000054 (0.000288) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023725 / 0.037411 (-0.013686) | 0.073023 / 0.014526 (0.058497) | 0.814888 / 0.176557 (0.638332) | 0.294122 / 0.737135 (-0.443013) | 0.088945 / 0.296338 (-0.207393) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.393561 / 0.215209 (0.178352) | 3.946544 / 2.077655 (1.868890) | 
1.916476 / 1.504120 (0.412356) | 1.721544 / 1.541195 (0.180349) | 1.768583 / 1.468490 (0.300093) | 0.508067 / 4.584777 (-4.076710) | 3.047832 / 3.745712 (-0.697880) | 2.952842 / 5.269862 (-2.317020) | 1.869337 / 4.565676 (-2.696339) | 0.057812 / 0.424275 (-0.366463) | 0.006694 / 0.007607 (-0.000913) | 0.463007 / 0.226044 (0.236963) | 4.635087 / 2.268929 (2.366158) | 2.419833 / 55.444624 (-53.024792) | 2.018519 / 6.876477 (-4.857958) | 2.043430 / 2.142072 (-0.098643) | 0.590895 / 4.805227 (-4.214333) | 0.126113 / 6.500664 (-6.374552) | 0.061045 / 0.075469 (-0.014424) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226850 / 1.841788 (-0.614937) | 17.336630 / 8.074308 (9.262322) | 13.651049 / 10.191392 (3.459656) | 0.143308 / 0.680424 (-0.537116) | 0.016938 / 0.534201 (-0.517263) | 0.332829 / 0.579283 (-0.246454) | 0.368684 / 0.434364 (-0.065680) | 0.385848 / 0.540337 (-0.154489) | 0.546391 / 1.386936 (-0.840545) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006149 / 0.011353 (-0.005204) | 0.003818 / 0.011008 (-0.007191) | 0.064012 / 0.038508 (0.025504) | 0.059846 / 0.023109 (0.036737) | 0.455928 / 0.275898 (0.180030) | 0.480736 / 0.323480 (0.157256) | 0.004874 / 0.007986 (-0.003111) | 0.002877 / 0.004328 (-0.001451) | 0.064195 / 0.004250 (0.059944) | 0.048146 / 0.037052 (0.011094) | 0.452638 / 0.258489 (0.194149) | 0.484339 / 0.293841 (0.190499) | 0.028832 / 0.128546 (-0.099715) | 0.008162 / 0.075646 (-0.067485) | 0.069855 / 0.419271 (-0.349417) | 0.041429 / 0.043533 (-0.002104) | 0.453282 / 0.255139 (0.198143) | 0.473812 / 0.283200 (0.190613) | 0.021186 / 0.141683 (-0.120497) | 1.465207 / 1.452155 (0.013052) | 1.508216 / 1.492716 (0.015500) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.242491 / 0.018006 (0.224485) | 0.421219 / 0.000490 (0.420730) | 0.011201 / 0.000200 (0.011001) | 0.000083 / 0.000054 (0.000028) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027015 / 0.037411 (-0.010396) | 0.080465 / 0.014526 (0.065939) | 0.092622 / 0.176557 (-0.083934) | 0.146111 / 0.737135 (-0.591024) | 0.091546 / 0.296338 (-0.204793) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458351 / 0.215209 (0.243142) | 4.591454 / 2.077655 (2.513799) | 2.508156 / 1.504120 (1.004037) | 2.328771 / 1.541195 (0.787576) | 2.423251 / 1.468490 (0.954761) | 0.508504 / 4.584777 (-4.076273) | 3.133789 / 3.745712 (-0.611923) | 2.862777 / 5.269862 (-2.407084) | 1.886327 / 4.565676 (-2.679350) | 0.058017 / 0.424275 (-0.366258) | 0.006496 / 0.007607 (-0.001111) | 0.529629 / 0.226044 (0.303585) | 5.310338 / 2.268929 (3.041409) | 2.973075 / 55.444624 (-52.471549) | 2.601313 / 6.876477 (-4.275163) | 2.777348 / 2.142072 (0.635275) | 0.593711 / 4.805227 (-4.211516) | 0.125453 / 6.500664 (-6.375211) | 0.061034 / 0.075469 (-0.014435) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.374391 / 1.841788 (-0.467397) | 18.768026 / 8.074308 (10.693718) | 15.053637 / 10.191392 (4.862245) | 0.158253 / 0.680424 (-0.522171) | 0.018126 / 0.534201 (-0.516075) | 0.337427 / 0.579283 (-0.241856) | 0.391678 / 0.434364 (-0.042686) | 0.398524 / 0.540337 (-0.141813) | 0.558629 / 1.386936 (-0.828307) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e0b79660f180c88517884f831eca620bc46a0957 \"CML watermark\")\n", "I think https://github.com/huggingface/datasets/pull/6334 fixes it already no ?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | 
read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006432 / 0.011353 (-0.004921) | 0.003861 / 0.011008 (-0.007147) | 0.084132 / 0.038508 (0.045624) | 0.069391 / 0.023109 (0.046282) | 0.341081 / 0.275898 (0.065183) | 0.375975 / 0.323480 (0.052495) | 0.003962 / 0.007986 (-0.004024) | 0.003235 / 0.004328 (-0.001094) | 0.064927 / 0.004250 (0.060677) | 0.054190 / 0.037052 (0.017137) | 0.350719 / 0.258489 (0.092230) | 0.393216 / 0.293841 (0.099375) | 0.031002 / 0.128546 (-0.097544) | 0.008416 / 0.075646 (-0.067230) | 0.289268 / 0.419271 (-0.130003) | 0.052167 / 0.043533 (0.008634) | 0.347559 / 0.255139 (0.092420) | 0.370908 / 0.283200 (0.087709) | 0.022540 / 0.141683 (-0.119142) | 1.486297 / 1.452155 (0.034143) | 1.576968 / 1.492716 (0.084252) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.237048 / 0.018006 (0.219042) | 0.452065 / 0.000490 (0.451575) | 0.013963 / 0.000200 (0.013763) | 0.000242 / 0.000054 (0.000188) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028084 / 0.037411 (-0.009327) | 0.081271 / 0.014526 (0.066745) | 0.096490 / 0.176557 (-0.080067) | 0.152106 / 0.737135 (-0.585030) | 0.096174 / 0.296338 (-0.200164) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.386585 / 0.215209 (0.171375) | 3.854996 / 2.077655 (1.777342) | 1.832898 / 1.504120 (0.328778) | 1.662832 / 1.541195 (0.121638) | 1.730753 / 1.468490 (0.262263) | 0.485286 / 4.584777 (-4.099491) | 3.571410 / 3.745712 (-0.174302) | 3.373035 / 5.269862 (-1.896826) | 1.995570 / 4.565676 (-2.570107) | 0.056711 / 0.424275 (-0.367564) | 0.007447 / 0.007607 (-0.000160) | 0.462985 / 0.226044 (0.236941) | 4.617186 / 2.268929 (2.348257) | 2.313915 / 55.444624 (-53.130709) | 1.961697 / 6.876477 (-4.914780) | 1.990410 / 2.142072 (-0.151662) | 0.580536 / 4.805227 (-4.224692) | 0.146275 / 6.500664 (-6.354389) | 0.059458 / 0.075469 (-0.016011) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.274841 / 1.841788 (-0.566947) | 18.641853 / 8.074308 (10.567545) | 13.977525 / 10.191392 (3.786133) | 0.151469 / 0.680424 (-0.528955) | 0.018111 / 0.534201 (-0.516090) | 0.393243 / 0.579283 
(-0.186040) | 0.412310 / 0.434364 (-0.022054) | 0.461646 / 0.540337 (-0.078692) | 0.633016 / 1.386936 (-0.753920) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006496 / 0.011353 (-0.004857) | 0.003973 / 0.011008 (-0.007035) | 0.064527 / 0.038508 (0.026019) | 0.069390 / 0.023109 (0.046281) | 0.401162 / 0.275898 (0.125264) | 0.431031 / 0.323480 (0.107551) | 0.005244 / 0.007986 (-0.002741) | 0.003283 / 0.004328 (-0.001046) | 0.064931 / 0.004250 (0.060680) | 0.054402 / 0.037052 (0.017350) | 0.397917 / 0.258489 (0.139428) | 0.436728 / 0.293841 (0.142887) | 0.031932 / 0.128546 (-0.096614) | 0.008557 / 0.075646 (-0.067089) | 0.073336 / 0.419271 (-0.345935) | 0.047559 / 0.043533 (0.004026) | 0.395825 / 0.255139 (0.140686) | 0.423002 / 0.283200 (0.139802) | 0.021708 / 0.141683 (-0.119975) | 1.501140 / 1.452155 (0.048985) | 1.558376 / 1.492716 (0.065660) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.289522 / 0.018006 (0.271516) | 0.449078 / 0.000490 (0.448589) | 0.034174 / 0.000200 (0.033974) | 0.000396 / 0.000054 (0.000342) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032533 / 0.037411 (-0.004878) | 0.093398 / 0.014526 (0.078872) | 0.106930 / 0.176557 (-0.069626) | 0.158743 / 0.737135 (-0.578393) | 0.106904 / 0.296338 (-0.189435) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.427479 / 0.215209 (0.212270) | 4.271758 / 2.077655 (2.194103) | 
2.298770 / 1.504120 (0.794650) | 2.134906 / 1.541195 (0.593712) | 2.220487 / 1.468490 (0.751996) | 0.490506 / 4.584777 (-4.094270) | 3.593876 / 3.745712 (-0.151836) | 3.225656 / 5.269862 (-2.044205) | 2.004434 / 4.565676 (-2.561243) | 0.058015 / 0.424275 (-0.366260) | 0.007221 / 0.007607 (-0.000387) | 0.504928 / 0.226044 (0.278884) | 5.049547 / 2.268929 (2.780618) | 2.743843 / 55.444624 (-52.700781) | 2.398399 / 6.876477 (-4.478078) | 2.562939 / 2.142072 (0.420867) | 0.597229 / 4.805227 (-4.207998) | 0.134664 / 6.500664 (-6.366001) | 0.059612 / 0.075469 (-0.015857) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.369692 / 1.841788 (-0.472095) | 19.065326 / 8.074308 (10.991018) | 14.404508 / 10.191392 (4.213116) | 0.175809 / 0.680424 (-0.504615) | 0.020137 / 0.534201 (-0.514064) | 0.394043 / 0.579283 (-0.185240) | 0.424772 / 0.434364 (-0.009592) | 0.475587 / 0.540337 (-0.064751) | 0.644275 / 1.386936 (-0.742661) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#224977971accd63d97ba0a90cc108c4754055ebb \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007259 / 0.011353 (-0.004094) | 0.004396 / 0.011008 (-0.006612) | 0.096456 / 0.038508 (0.057948) | 0.078752 / 0.023109 (0.055643) | 0.359215 / 0.275898 (0.083317) | 0.396927 / 0.323480 (0.073448) | 0.005611 / 0.007986 (-0.002375) | 0.003687 / 0.004328 (-0.000641) | 0.072794 / 0.004250 (0.068544) | 0.059794 / 0.037052 (0.022741) | 0.372352 / 0.258489 (0.113863) | 0.414038 / 0.293841 (0.120197) | 0.034490 / 0.128546 (-0.094056) | 0.009790 / 0.075646 (-0.065857) | 0.326338 / 0.419271 (-0.092934) | 0.058582 / 0.043533 (0.015049) | 0.354221 / 0.255139 (0.099082) | 0.386669 / 0.283200 (0.103469) | 0.025356 / 0.141683 (-0.116327) | 1.664104 / 1.452155 (0.211950) | 1.766825 / 1.492716 (0.274108) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 
0.251107 / 0.018006 (0.233101) | 0.478833 / 0.000490 (0.478344) | 0.010776 / 0.000200 (0.010577) | 0.000292 / 0.000054 (0.000238) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032869 / 0.037411 (-0.004543) | 0.098449 / 0.014526 (0.083923) | 0.109954 / 0.176557 (-0.066602) | 0.176786 / 0.737135 (-0.560350) | 0.113477 / 0.296338 (-0.182862) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.431169 / 0.215209 (0.215960) | 4.303239 / 2.077655 (2.225585) | 2.088885 / 1.504120 (0.584765) | 1.895900 / 1.541195 (0.354706) | 1.997442 / 1.468490 (0.528952) | 0.541840 / 4.584777 (-4.042937) | 3.991982 / 3.745712 (0.246270) | 3.842421 / 5.269862 (-1.427440) | 2.281150 / 4.565676 (-2.284526) | 0.063851 / 0.424275 (-0.360425) | 0.008470 / 0.007607 (0.000863) | 0.515886 / 0.226044 (0.289841) | 5.202908 / 2.268929 (2.933980) | 2.662789 / 55.444624 (-52.781835) | 2.266731 / 6.876477 (-4.609746) | 2.343760 / 2.142072 (0.201688) | 0.641050 / 4.805227 (-4.164177) | 0.148236 / 6.500664 (-6.352428) | 0.067422 / 0.075469 (-0.008047) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.475729 / 1.841788 (-0.366059) | 22.401583 / 8.074308 (14.327274) | 15.886237 / 10.191392 (5.694845) | 0.171828 / 0.680424 (-0.508595) | 0.022161 / 0.534201 (-0.512040) | 0.465873 / 0.579283 (-0.113411) | 0.476386 / 0.434364 (0.042022) | 0.538317 / 0.540337 (-0.002020) | 0.754375 / 1.386936 (-0.632561) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | 
write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007429 / 0.011353 (-0.003924) | 0.004592 / 0.011008 (-0.006416) | 0.072315 / 0.038508 (0.033807) | 0.080806 / 0.023109 (0.057697) | 0.444607 / 0.275898 (0.168709) | 0.476970 / 0.323480 (0.153490) | 0.006030 / 0.007986 (-0.001956) | 0.003755 / 0.004328 (-0.000573) | 0.074602 / 0.004250 (0.070352) | 0.061846 / 0.037052 (0.024794) | 0.450928 / 0.258489 (0.192439) | 0.493932 / 0.293841 (0.200091) | 0.037398 / 0.128546 (-0.091148) | 0.009807 / 0.075646 (-0.065840) | 0.080531 / 0.419271 (-0.338741) | 0.054052 / 0.043533 (0.010519) | 0.453034 / 0.255139 (0.197895) | 0.464959 / 0.283200 (0.181760) | 0.024718 / 0.141683 (-0.116965) | 1.687552 / 1.452155 (0.235397) | 1.765746 / 1.492716 (0.273029) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266998 / 0.018006 (0.248992) | 0.479832 / 0.000490 (0.479342) | 0.005429 / 0.000200 (0.005229) | 0.000117 / 0.000054 (0.000062) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038885 / 0.037411 (0.001474) | 0.105931 / 0.014526 (0.091405) | 0.120880 / 0.176557 (-0.055677) | 0.184006 / 0.737135 (-0.553130) | 0.120750 / 0.296338 (-0.175589) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478626 / 0.215209 (0.263417) | 4.797355 / 2.077655 (2.719700) | 2.582758 / 1.504120 (1.078638) | 2.396488 / 1.541195 (0.855293) | 2.515597 / 1.468490 (1.047107) | 0.544541 / 4.584777 (-4.040236) | 4.150702 / 3.745712 (0.404990) | 3.676837 / 5.269862 (-1.593024) | 2.287275 / 4.565676 (-2.278402) | 0.064602 / 0.424275 (-0.359673) | 0.008253 / 0.007607 (0.000646) | 0.576201 / 0.226044 (0.350157) | 5.859839 / 2.268929 (3.590910) | 3.248603 / 55.444624 (-52.196021) | 2.841959 / 6.876477 (-4.034518) | 2.991120 / 2.142072 (0.849047) | 0.667755 / 4.805227 (-4.137472) | 0.151219 / 6.500664 (-6.349445) | 0.068990 / 0.075469 (-0.006479) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.572359 / 1.841788 (-0.269429) | 21.890279 / 8.074308 (13.815971) | 15.927473 / 10.191392 (5.736081) | 0.170388 / 0.680424 (-0.510036) | 0.023282 / 0.534201 (-0.510919) | 0.459371 / 0.579283 (-0.119912) | 0.468838 / 0.434364 (0.034475) | 0.546438 / 0.540337 (0.006101) | 0.746912 / 1.386936 
(-0.640024) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8197ce872d2e24bd1ffbb07213faea25078f1386 \"CML watermark\")\n", "Yes, @lhoestq, you are right. I think we cross-sent fixing PRs within a 15-minute interval... :sweat_smile: \r\n\r\nI would say the code in this PR is simpler and easier to understand, but feel free to ignore it.", "I think the correct way is to check if \"file\" is in the tuple if it's a tuple (in case someone adds another protocol name for the local filesystem)" ]
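The last comment above sketches how the local-filesystem check could be made robust. A minimal illustration of that idea, relying on fsspec's convention that a filesystem's `protocol` attribute may be either a string or a tuple of aliases; `is_local_fs` is a hypothetical helper name, not code from this PR:

```python
# A minimal sketch of the check proposed above; fsspec filesystems expose
# `protocol` as either a string or a tuple of protocol aliases.
def is_local_fs(fs) -> bool:
    protocol = fs.protocol
    if isinstance(protocol, tuple):
        return "file" in protocol
    return protocol == "file"
```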
2023-10-23T09:29:17Z
2023-11-14T14:18:12Z
2023-11-14T14:17:40Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6335.diff", "html_url": "https://github.com/huggingface/datasets/pull/6335", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6335.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6335" }
Fix #6333.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6335/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6335/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6122
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6122/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6122/comments
https://api.github.com/repos/huggingface/datasets/issues/6122/events
https://github.com/huggingface/datasets/issues/6122
1,837,335,721
I_kwDODunzps5tg4Sp
6,122
Upload README via `push_to_hub`
{ "avatar_url": "https://avatars.githubusercontent.com/u/27999909?v=4", "events_url": "https://api.github.com/users/liyucheng09/events{/privacy}", "followers_url": "https://api.github.com/users/liyucheng09/followers", "following_url": "https://api.github.com/users/liyucheng09/following{/other_user}", "gists_url": "https://api.github.com/users/liyucheng09/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/liyucheng09", "id": 27999909, "login": "liyucheng09", "node_id": "MDQ6VXNlcjI3OTk5OTA5", "organizations_url": "https://api.github.com/users/liyucheng09/orgs", "received_events_url": "https://api.github.com/users/liyucheng09/received_events", "repos_url": "https://api.github.com/users/liyucheng09/repos", "site_admin": false, "starred_url": "https://api.github.com/users/liyucheng09/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/liyucheng09/subscriptions", "type": "User", "url": "https://api.github.com/users/liyucheng09" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "You can use `huggingface_hub`'s [Card API](https://huggingface.co/docs/huggingface_hub/package_reference/cards) to programmatically push a dataset card to the Hub." ]
2023-08-04T21:00:27Z
2023-08-21T18:18:54Z
2023-08-21T18:18:54Z
NONE
null
null
null
### Feature request `push_to_hub` now allows users to upload datasets programmatically. However, based on the latest doc, we still need to open the dataset page to add a README file manually. However, I did discover a snippet that initializes a README for every `push_to_hub`: ``` dataset_card = ( DatasetCard( "---\n" + str(dataset_card_data) + "\n---\n" + f'# Dataset Card for "{repo_id.split("/")[-1]}"\n\n[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)' ) if dataset_card is None else dataset_card ) HfApi(endpoint=config.HF_ENDPOINT).upload_file( path_or_fileobj=str(dataset_card).encode(), path_in_repo="README.md", repo_id=repo_id, token=token, repo_type="dataset", revision=branch, ) ``` So, if we can enable `push_to_hub` to upload a README file ourselves instead of using the auto-generated one, it can save a ton of time and will definitely alleviate the current "lack-of-dataset-card" situation. ### Motivation As elaborated above. ### Your contribution I might be able to make a PR.
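As the reply above notes, `huggingface_hub`'s Card API already covers this use case; a minimal sketch, assuming a hypothetical repo id `username/my-dataset`:

```python
from huggingface_hub import DatasetCard

# Build a card from custom Markdown (with YAML metadata) and push it;
# DatasetCard.push_to_hub targets repo_type="dataset" by default.
card = DatasetCard(
    "---\nlicense: mit\n---\n# My dataset\n\nA hand-written dataset card."
)
card.push_to_hub("username/my-dataset")
```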
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6122/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6122/timeline
null
not_planned
false
https://api.github.com/repos/huggingface/datasets/issues/1922
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1922/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1922/comments
https://api.github.com/repos/huggingface/datasets/issues/1922/events
https://github.com/huggingface/datasets/issues/1922
813,140,806
MDU6SXNzdWU4MTMxNDA4MDY=
1,922
How to update the "wino_bias" dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4", "events_url": "https://api.github.com/users/JieyuZhao/events{/privacy}", "followers_url": "https://api.github.com/users/JieyuZhao/followers", "following_url": "https://api.github.com/users/JieyuZhao/following{/other_user}", "gists_url": "https://api.github.com/users/JieyuZhao/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/JieyuZhao", "id": 22306304, "login": "JieyuZhao", "node_id": "MDQ6VXNlcjIyMzA2MzA0", "organizations_url": "https://api.github.com/users/JieyuZhao/orgs", "received_events_url": "https://api.github.com/users/JieyuZhao/received_events", "repos_url": "https://api.github.com/users/JieyuZhao/repos", "site_admin": false, "starred_url": "https://api.github.com/users/JieyuZhao/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/JieyuZhao/subscriptions", "type": "User", "url": "https://api.github.com/users/JieyuZhao" }
[]
open
false
null
[]
null
[ "Hi @JieyuZhao !\r\n\r\nYou can edit the dataset card of wino_bias to update the URL via a Pull Request. This would be really appreciated :)\r\n\r\nThe dataset card is the README.md file you can find at https://github.com/huggingface/datasets/tree/master/datasets/wino_bias\r\nAlso the homepage url is also mentioned in the wino_bias.py so feel free to update it there as well.\r\n\r\nYou can create a Pull Request directly from the github interface by editing the files you want and submit a PR, or from a local clone of the repository.\r\n\r\nThanks for noticing !" ]
2021-02-22T05:39:39Z
2021-02-22T10:35:59Z
null
CONTRIBUTOR
null
null
null
Hi all, Thanks for the efforts to collect all the datasets! But I think there is a problem with the wino_bias dataset. The current link is not correct. How can I update that? Thanks!
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1922/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1922/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/5663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5663/comments
https://api.github.com/repos/huggingface/datasets/issues/5663/events
https://github.com/huggingface/datasets/issues/5663
1,637,173,248
I_kwDODunzps5hlUgA
5,663
CI is broken: ModuleNotFoundError: jax requires jaxlib to be installed
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[]
2023-03-23T09:39:43Z
2023-03-23T10:09:55Z
2023-03-23T10:09:55Z
MEMBER
null
null
null
CI test_py310 is broken: see https://github.com/huggingface/datasets/actions/runs/4498945505/jobs/7916194236?pr=5662 ``` FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_in_memory - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_on_disk - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_audio - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_device - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_image - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/test_formatting.py::FormatterTest::test_jax_formatter_jnp_array_kwargs - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. FAILED tests/features/test_features.py::CastToPythonObjectsTest::test_cast_to_python_objects_jax - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installation instructions. ===== 8 failed, 2147 passed, 10 skipped, 37 warnings in 228.69s (0:03:48) ====== ```
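One way the failing tests could be guarded, sketched here under the assumption that skipping (rather than installing `jaxlib` in CI) is acceptable; `require_jax` is a hypothetical marker name, not the repository's actual decorator:

```python
import importlib.util

import pytest

# jax imports fail unless jaxlib is also present, so check for both packages.
jax_available = (
    importlib.util.find_spec("jax") is not None
    and importlib.util.find_spec("jaxlib") is not None
)
require_jax = pytest.mark.skipif(
    not jax_available, reason="jax/jaxlib not installed"
)
```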
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5663/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5663/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5393
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5393/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5393/comments
https://api.github.com/repos/huggingface/datasets/issues/5393/events
https://github.com/huggingface/datasets/pull/5393
1,512,908,613
PR_kwDODunzps5GTg0a
5,393
Finish deprecating the fs argument
{ "avatar_url": "https://avatars.githubusercontent.com/u/15098095?v=4", "events_url": "https://api.github.com/users/dconathan/events{/privacy}", "followers_url": "https://api.github.com/users/dconathan/followers", "following_url": "https://api.github.com/users/dconathan/following{/other_user}", "gists_url": "https://api.github.com/users/dconathan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dconathan", "id": 15098095, "login": "dconathan", "node_id": "MDQ6VXNlcjE1MDk4MDk1", "organizations_url": "https://api.github.com/users/dconathan/orgs", "received_events_url": "https://api.github.com/users/dconathan/received_events", "repos_url": "https://api.github.com/users/dconathan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dconathan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dconathan/subscriptions", "type": "User", "url": "https://api.github.com/users/dconathan" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Thanks for the deprecation. Some minor suggested fixes below...\r\n> \r\n> Also note that the corresponding tests should be updated as well.\r\n\r\nThanks for the suggestions/typo fixes. I updated the failing test - passing locally now", "Nice thanks !\r\n\r\nI believe you also need to update `_load_info` and `_save_info` in `builder.py` - they're still passing `fs=self._fs` instead of `storage_options=self._fs.storage_options`\r\n\r\nThis should remove the remaining warnings in the CI such as \r\n\r\n```python\r\ntests/test_builder.py::test_builder_with_filesystem_download_and_prepare_reload\r\ntests/test_load.py::test_load_dataset_local[False]\r\ntests/test_load.py::test_load_dataset_local[True]\r\ntests/test_load.py::test_load_dataset_zip_csv[csv_path-False]\r\ntests/test_load.py::test_load_dataset_then_move_then_reload\r\n /opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/info.py:344: FutureWarning: 'fs' was deprecated in favor of 'storage_options' in version 2.9.0 and will be removed in 3.0.0.\r\n You can remove this warning by passing 'storage_options=fs.storage_options' instead.\r\n```", "re: docstring, I assume passing in `storage_options=s3.storage_options` is correct/necessary to pass the secrets?", "what about \r\nhttps://github.com/huggingface/datasets/blob/5b793dd8c43bf6e85f165238becb3c64f6cd3ed0/src/datasets/filesystems/__init__.py#L43-L54\r\nleave as is? Is this function no longer necessary?", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008877 / 0.011353 (-0.002475) | 0.004725 / 0.011008 (-0.006283) | 0.100738 / 0.038508 (0.062230) | 0.030251 / 0.023109 (0.007141) | 0.301483 / 0.275898 (0.025585) | 0.374161 / 0.323480 (0.050681) | 0.007225 / 0.007986 (-0.000761) | 0.003654 / 0.004328 (-0.000674) | 0.078400 / 0.004250 (0.074149) | 0.035786 / 0.037052 (-0.001267) | 0.309744 / 0.258489 (0.051255) | 0.355834 / 0.293841 (0.061994) | 0.034344 / 0.128546 (-0.094202) | 0.011584 / 0.075646 (-0.064062) | 0.321462 / 0.419271 (-0.097810) | 0.041201 / 0.043533 (-0.002332) | 0.298808 / 0.255139 (0.043669) | 0.332626 / 0.283200 (0.049426) | 0.089131 / 0.141683 (-0.052552) | 1.477888 / 1.452155 (0.025734) | 1.530365 / 1.492716 (0.037649) |\n\n### 
Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.191647 / 0.018006 (0.173640) | 0.424339 / 0.000490 (0.423849) | 0.002941 / 0.000200 (0.002741) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023442 / 0.037411 (-0.013969) | 0.097264 / 0.014526 (0.082738) | 0.105655 / 0.176557 (-0.070901) | 0.145055 / 0.737135 (-0.592081) | 0.108750 / 0.296338 (-0.187588) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.422925 / 0.215209 (0.207716) | 4.216022 / 2.077655 (2.138367) | 1.876441 / 1.504120 (0.372322) | 1.665115 / 1.541195 (0.123920) | 1.711105 / 1.468490 (0.242615) | 0.701820 / 4.584777 (-3.882957) | 3.389319 / 3.745712 (-0.356393) | 1.909868 / 5.269862 (-3.359994) | 1.270482 / 4.565676 (-3.295195) | 0.083680 / 0.424275 (-0.340595) | 0.012347 / 0.007607 (0.004740) | 0.531076 / 0.226044 (0.305031) | 5.344045 / 2.268929 (3.075117) | 2.310897 / 55.444624 (-53.133728) | 1.971953 / 6.876477 (-4.904524) | 2.113748 / 2.142072 (-0.028325) | 0.823766 / 4.805227 (-3.981462) | 0.150864 / 6.500664 (-6.349800) | 0.066263 / 0.075469 (-0.009206) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.253190 / 1.841788 (-0.588598) | 13.757887 / 8.074308 (5.683579) | 13.888195 / 10.191392 (3.696803) | 0.137285 / 0.680424 (-0.543139) | 0.029151 / 0.534201 (-0.505050) | 0.387402 / 0.579283 (-0.191881) | 0.401673 / 0.434364 (-0.032691) | 0.450474 / 0.540337 (-0.089863) | 0.533757 / 1.386936 (-0.853179) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after 
write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006919 / 0.011353 (-0.004434) | 0.004655 / 0.011008 (-0.006353) | 0.096946 / 0.038508 (0.058438) | 0.028697 / 0.023109 (0.005588) | 0.420020 / 0.275898 (0.144122) | 0.460193 / 0.323480 (0.136713) | 0.005189 / 0.007986 (-0.002796) | 0.003425 / 0.004328 (-0.000904) | 0.074900 / 0.004250 (0.070649) | 0.041844 / 0.037052 (0.004792) | 0.421538 / 0.258489 (0.163049) | 0.468497 / 0.293841 (0.174656) | 0.032573 / 0.128546 (-0.095973) | 0.011731 / 0.075646 (-0.063916) | 0.320221 / 0.419271 (-0.099050) | 0.042113 / 0.043533 (-0.001420) | 0.422757 / 0.255139 (0.167618) | 0.445372 / 0.283200 (0.162172) | 0.090300 / 0.141683 (-0.051383) | 1.458598 / 1.452155 (0.006443) | 1.550060 / 1.492716 (0.057344) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235489 / 0.018006 (0.217483) | 0.418207 / 0.000490 (0.417718) | 0.002511 / 0.000200 (0.002311) | 0.000080 / 0.000054 (0.000025) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025603 / 0.037411 (-0.011808) | 0.100237 / 0.014526 (0.085711) | 0.108617 / 0.176557 (-0.067939) | 0.148417 / 0.737135 (-0.588719) | 0.110163 / 0.296338 (-0.186176) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.474804 / 0.215209 (0.259595) | 4.745370 / 2.077655 (2.667715) | 2.417819 / 1.504120 (0.913699) | 2.209892 / 1.541195 (0.668697) | 2.263296 / 1.468490 (0.794806) | 0.695537 / 4.584777 (-3.889240) | 3.381028 / 3.745712 (-0.364684) | 2.952271 / 5.269862 (-2.317591) | 1.507041 / 4.565676 (-3.058636) | 0.083334 / 0.424275 (-0.340941) | 0.012554 / 0.007607 (0.004947) | 0.578861 / 0.226044 (0.352817) | 5.795241 / 2.268929 (3.526313) | 2.858544 / 55.444624 (-52.586080) | 2.516270 / 6.876477 (-4.360207) | 2.557350 / 2.142072 (0.415278) | 0.801799 / 4.805227 (-4.003428) | 0.151579 / 6.500664 (-6.349085) | 0.068765 / 0.075469 (-0.006704) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279935 / 1.841788 (-0.561853) | 14.049065 / 8.074308 (5.974757) | 
13.972703 / 10.191392 (3.781311) | 0.140551 / 0.680424 (-0.539873) | 0.016831 / 0.534201 (-0.517370) | 0.383886 / 0.579283 (-0.195397) | 0.385661 / 0.434364 (-0.048703) | 0.444525 / 0.540337 (-0.095813) | 0.532197 / 1.386936 (-0.854739) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8d206848fb7afeafecf2a2581ca9a332bdedefa9 \"CML watermark\")\n" ]
2022-12-28T15:33:17Z
2023-01-18T12:42:33Z
2023-01-18T12:35:32Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5393.diff", "html_url": "https://github.com/huggingface/datasets/pull/5393", "merged_at": "2023-01-18T12:35:32Z", "patch_url": "https://github.com/huggingface/datasets/pull/5393.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5393" }
See #5385 for some discussion on this. The `fs=` arg was deprecated in `Dataset.save_to_disk` and `Dataset.load_from_disk` in `2.8.0` (to be removed in `3.0.0`). There are a few other places where the `fs=` arg was still used (functions/methods in `datasets.info` and `datasets.load`). This PR adds similar behavior, warnings, and the `storage_options=` arg to these functions and methods. One question: should the "deprecated" / "added" versions be `2.8.1` for the docs/warnings on these? Right now I'm going with "fs was deprecated in 2.8.0" but "storage_options= was added in 2.8.1" where appropriate. @mariosasko
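For readers following the deprecation, a minimal before/after sketch of the migration this PR completes (the S3 credentials and bucket path are placeholders):

```python
import s3fs
from datasets import load_from_disk

s3 = s3fs.S3FileSystem(key="<aws key>", secret="<aws secret>")

# Deprecated since 2.8.0:
#   ds = load_from_disk("s3://my-bucket/my-dataset", fs=s3)
# Replacement:
ds = load_from_disk(
    "s3://my-bucket/my-dataset", storage_options=s3.storage_options
)
```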
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 2, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/5393/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5393/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3950
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3950/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3950/comments
https://api.github.com/repos/huggingface/datasets/issues/3950/events
https://github.com/huggingface/datasets/issues/3950
1,171,560,585
I_kwDODunzps5F1JiJ
3,950
Streaming Datasets don't work with Transformers Trainer when dataloader_num_workers>1
{ "avatar_url": "https://avatars.githubusercontent.com/u/9633?v=4", "events_url": "https://api.github.com/users/dlwh/events{/privacy}", "followers_url": "https://api.github.com/users/dlwh/followers", "following_url": "https://api.github.com/users/dlwh/following{/other_user}", "gists_url": "https://api.github.com/users/dlwh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dlwh", "id": 9633, "login": "dlwh", "node_id": "MDQ6VXNlcjk2MzM=", "organizations_url": "https://api.github.com/users/dlwh/orgs", "received_events_url": "https://api.github.com/users/dlwh/received_events", "repos_url": "https://api.github.com/users/dlwh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dlwh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dlwh/subscriptions", "type": "User", "url": "https://api.github.com/users/dlwh" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "Hi, thanks for reporting. This could be related to https://github.com/huggingface/datasets/issues/3148 too\r\n\r\nWe should definitely make `TorchIterableDataset` picklable by moving it in the main code instead of inside a function. If you'd like to contribute, feel free to open a Pull Request :)\r\n\r\nI'm also taking a look at your second issue, which is more technical" ]
2022-03-16T21:14:11Z
2022-06-10T20:47:26Z
2022-06-10T20:47:26Z
NONE
null
null
null
## Describe the bug Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash. ## Steps to reproduce the bug ```python import transformers from transformers import Trainer, AutoModelForCausalLM, TrainingArguments import datasets ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("torch") model = AutoModelForCausalLM.from_pretrained("distilgpt2") Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() ``` ## Expected results For this code I'd expect a crash related to not having preprocessed the data, but instead we get a pickling error. ## Actual results ``` 0%| | 0/1000 [00:00<?, ?it/s]Traceback (most recent call last): File "/Users/dlwh/src/mistral/src/stream_fork_crash.py", line 7, in <module> Trainer(model, train_dataset=ds, args=TrainingArguments("out", max_steps=1000, dataloader_num_workers=4)).train() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/transformers/trainer.py", line 1339, in train for step, inputs in enumerate(epoch_iterator): File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 359, in __iter__ return self._get_iterator() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 305, in _get_iterator return _MultiProcessingDataLoaderIter(self) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 918, in __init__ w.start() File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/process.py", line 121, in start self._popen = self._Popen(self) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 224, in _Popen return _default_context.get_context().Process._Popen(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/context.py", line 284, in _Popen return Popen(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__ super().__init__(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__ self._launch(process_obj) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch reduction.dump(process_obj, fp) File "/Users/dlwh/.conda/envs/mistral/lib/python3.8/multiprocessing/reduction.py", line 60, in dump ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object 'iterable_dataset.<locals>.TorchIterableDataset' 0%| | 0/1000 [00:00<?, ?it/s] ``` This immediate crash can be fixed by not using a local class to make the `TorchIterableDataset` (Note that you have to do with_format("torch") or you get an exception because the dataset has no len) However, any lambdas etc used as maps will also trigger this crash. A more permanent fix would be to move away from multiprocessing and instead use something like pathos or multiprocessing_on_dill (https://stackoverflow.com/questions/19984152/what-can-multiprocessing-and-dill-do-together) Note that if you bypass this crash you get another crash. (I'll file a separate bug). ## Environment info - `datasets` version: 2.0.0 - Platform: macOS-12.2-arm64-arm-64bit - Python version: 3.8.12 - PyArrow version: 7.0.0 - Pandas version: 1.4.1
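The fix suggested in the reply, sketched minimally: define the wrapper at module level so spawn-based `multiprocessing` workers can pickle it, unlike a class created inside a function body (the names here are illustrative, not the library's actual ones):

```python
import torch


# Defined at module scope, so ForkingPickler can locate it by qualified name;
# a class defined inside a function cannot be pickled this way.
class TorchIterableDataset(torch.utils.data.IterableDataset):
    def __init__(self, hf_iterable_dataset):
        self.hf_iterable_dataset = hf_iterable_dataset

    def __iter__(self):
        yield from self.hf_iterable_dataset
```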
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3950/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3950/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1033
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1033/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1033/comments
https://api.github.com/repos/huggingface/datasets/issues/1033/events
https://github.com/huggingface/datasets/pull/1033
755,921,927
MDExOlB1bGxSZXF1ZXN0NTMxNTUxNzYw
1,033
Add support for ".txm" format
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "Neat! Looks like you need a rebase and then should be good to go :) ", "Done, @yjernite, @lhoestq.", "If you agree, we could merge this.", "Hi ! yes sure :) can you just merge master into this branch before we merge ?", "Done @lhoestq " ]
2020-12-03T06:52:08Z
2021-02-21T19:47:11Z
2021-02-21T19:47:11Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1033.diff", "html_url": "https://github.com/huggingface/datasets/pull/1033", "merged_at": "2021-02-21T19:47:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/1033.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1033" }
In dummy data generation, add support for XML-like ".txm" file format. Also support filenames with additional compression extension: ".txm.gz".
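A minimal sketch of the filename handling this describes, assuming the goal is to recover the base extension before an optional `.gz` suffix (the helper name is hypothetical):

```python
import os


def base_extension(filename: str) -> str:
    """Return the extension before an optional .gz suffix, e.g. 'a.txm.gz' -> '.txm'."""
    if filename.endswith(".gz"):
        filename = filename[: -len(".gz")]
    return os.path.splitext(filename)[1]


assert base_extension("dummy_data.txm.gz") == ".txm"
assert base_extension("dummy_data.txm") == ".txm"
```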
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1033/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1033/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5283
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5283/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5283/comments
https://api.github.com/repos/huggingface/datasets/issues/5283/events
https://github.com/huggingface/datasets/pull/5283
1,460,291,003
PR_kwDODunzps5De5M1
5,283
Release: 2.6.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-22T17:36:24Z
2022-11-22T17:50:12Z
2022-11-22T17:47:02Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5283.diff", "html_url": "https://github.com/huggingface/datasets/pull/5283", "merged_at": "2022-11-22T17:47:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/5283.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5283" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5283/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5283/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5581/comments
https://api.github.com/repos/huggingface/datasets/issues/5581/events
https://github.com/huggingface/datasets/issues/5581
1,600,675,489
I_kwDODunzps5faF6h
5,581
[DOC] Mistaken docs on set_format
{ "avatar_url": "https://avatars.githubusercontent.com/u/36224762?v=4", "events_url": "https://api.github.com/users/NightMachinery/events{/privacy}", "followers_url": "https://api.github.com/users/NightMachinery/followers", "following_url": "https://api.github.com/users/NightMachinery/following{/other_user}", "gists_url": "https://api.github.com/users/NightMachinery/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NightMachinery", "id": 36224762, "login": "NightMachinery", "node_id": "MDQ6VXNlcjM2MjI0NzYy", "organizations_url": "https://api.github.com/users/NightMachinery/orgs", "received_events_url": "https://api.github.com/users/NightMachinery/received_events", "repos_url": "https://api.github.com/users/NightMachinery/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NightMachinery/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NightMachinery/subscriptions", "type": "User", "url": "https://api.github.com/users/NightMachinery" }
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
null
[]
null
[ "Thanks for reporting!" ]
2023-02-27T08:03:09Z
2023-02-28T19:19:17Z
2023-02-28T19:19:17Z
CONTRIBUTOR
null
null
null
### Describe the bug https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/main_classes#datasets.Dataset.set_format <img width="700" alt="image" src="https://user-images.githubusercontent.com/36224762/221506973-ae2e3991-60a7-4d4e-99f8-965c6eb61e59.png"> While actually running it will result in: <img width="1094" alt="image" src="https://user-images.githubusercontent.com/36224762/221507032-007dab82-8781-4319-b21a-e6e4d40d97b3.png"> ### Steps to reproduce the bug _ ### Expected behavior _ ### Environment info - `datasets` version: 2.10.0 - Platform: Linux-5.10.147+-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyArrow version: 9.0.0 - Pandas version: 1.3.5
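Since the screenshots above do not survive in text form, a small sketch of `set_format`'s actual behavior for context (this illustrates the general API, not necessarily the exact doc passage in the screenshots): `set_format` mutates the dataset in place and returns `None`, while `with_format` returns a formatted copy.

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3]})

ret = ds.set_format(type="numpy")   # mutates ds in place...
assert ret is None                  # ...and returns None

ds2 = ds.with_format("torch")       # returns a new, formatted Dataset
```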
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5581/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5581/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6028
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6028/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6028/comments
https://api.github.com/repos/huggingface/datasets/issues/6028/events
https://github.com/huggingface/datasets/pull/6028
1,803,294,981
PR_kwDODunzps5Vb3LJ
6,028
Use new hffs
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006665 / 0.011353 (-0.004688) | 0.004376 / 0.011008 (-0.006633) | 0.085529 / 0.038508 (0.047021) | 0.076372 / 0.023109 (0.053263) | 0.310019 / 0.275898 (0.034121) | 0.341404 / 0.323480 (0.017924) | 0.005666 / 0.007986 (-0.002320) | 0.003763 / 0.004328 (-0.000566) | 0.064678 / 0.004250 (0.060427) | 0.059283 / 0.037052 (0.022231) | 0.316194 / 0.258489 (0.057704) | 0.349397 / 0.293841 (0.055557) | 0.031199 / 0.128546 (-0.097347) | 0.008724 / 0.075646 (-0.066923) | 0.300236 / 0.419271 (-0.119035) | 0.068872 / 0.043533 (0.025339) | 0.308521 / 0.255139 (0.053382) | 0.331292 / 0.283200 (0.048092) | 0.028236 / 0.141683 (-0.113447) | 1.501365 / 1.452155 (0.049211) | 1.554334 / 1.492716 (0.061618) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.238291 / 0.018006 (0.220285) | 0.565069 / 0.000490 (0.564580) | 0.001626 / 0.000200 (0.001426) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.029777 / 0.037411 (-0.007634) | 0.082873 / 0.014526 (0.068347) | 0.099619 / 0.176557 (-0.076937) | 0.156572 / 0.737135 (-0.580563) | 0.099887 / 0.296338 (-0.196452) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.401017 / 0.215209 (0.185808) | 3.827192 / 2.077655 (1.749537) | 
1.861554 / 1.504120 (0.357434) | 1.699869 / 1.541195 (0.158674) | 1.720043 / 1.468490 (0.251553) | 0.486757 / 4.584777 (-4.098020) | 3.638125 / 3.745712 (-0.107587) | 5.844959 / 5.269862 (0.575097) | 3.454901 / 4.565676 (-1.110775) | 0.057650 / 0.424275 (-0.366625) | 0.007341 / 0.007607 (-0.000266) | 0.462698 / 0.226044 (0.236654) | 4.633472 / 2.268929 (2.364544) | 2.287607 / 55.444624 (-53.157017) | 2.057318 / 6.876477 (-4.819159) | 2.203657 / 2.142072 (0.061584) | 0.598136 / 4.805227 (-4.207091) | 0.134012 / 6.500664 (-6.366653) | 0.060824 / 0.075469 (-0.014645) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.277752 / 1.841788 (-0.564036) | 20.013398 / 8.074308 (11.939089) | 14.372993 / 10.191392 (4.181601) | 0.169991 / 0.680424 (-0.510433) | 0.018344 / 0.534201 (-0.515857) | 0.396985 / 0.579283 (-0.182299) | 0.416289 / 0.434364 (-0.018075) | 0.458658 / 0.540337 (-0.081680) | 0.692980 / 1.386936 (-0.693956) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006689 / 0.011353 (-0.004664) | 0.004393 / 0.011008 (-0.006615) | 0.064069 / 0.038508 (0.025561) | 0.080717 / 0.023109 (0.057607) | 0.370090 / 0.275898 (0.094191) | 0.400432 / 0.323480 (0.076952) | 0.005613 / 0.007986 (-0.002372) | 0.003641 / 0.004328 (-0.000687) | 0.064771 / 0.004250 (0.060520) | 0.057555 / 0.037052 (0.020502) | 0.392156 / 0.258489 (0.133667) | 0.409842 / 0.293841 (0.116001) | 0.031500 / 0.128546 (-0.097047) | 0.008786 / 0.075646 (-0.066860) | 0.070342 / 0.419271 (-0.348929) | 0.048646 / 0.043533 (0.005113) | 0.360914 / 0.255139 (0.105775) | 0.387626 / 0.283200 (0.104426) | 0.022787 / 0.141683 (-0.118896) | 1.508915 / 1.452155 (0.056761) | 1.539719 / 1.492716 (0.047002) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257985 / 0.018006 (0.239979) | 0.550990 / 0.000490 (0.550501) | 0.000407 / 0.000200 (0.000207) | 0.000057 / 0.000054 (0.000003) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030183 / 0.037411 (-0.007228) | 0.086882 / 0.014526 (0.072356) | 0.102382 / 0.176557 (-0.074175) | 0.154745 / 0.737135 (-0.582390) | 0.104008 / 0.296338 (-0.192331) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426284 / 0.215209 (0.211075) | 4.240812 / 2.077655 (2.163158) | 2.261240 / 1.504120 (0.757120) | 2.085905 / 1.541195 (0.544710) | 2.160374 / 1.468490 (0.691883) | 0.481126 / 4.584777 (-4.103651) | 3.516234 / 3.745712 (-0.229478) | 3.325322 / 5.269862 (-1.944539) | 2.043307 / 4.565676 (-2.522369) | 0.056663 / 0.424275 (-0.367612) | 0.007786 / 0.007607 (0.000179) | 0.497614 / 0.226044 (0.271570) | 4.974529 / 2.268929 (2.705600) | 2.700018 / 55.444624 (-52.744606) | 2.393778 / 6.876477 (-4.482699) | 2.628202 / 2.142072 (0.486130) | 0.594316 / 4.805227 (-4.210911) | 0.147092 / 6.500664 (-6.353572) | 0.062207 / 0.075469 (-0.013262) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.315676 / 1.841788 (-0.526112) | 20.749251 / 8.074308 (12.674943) | 14.371553 / 10.191392 (4.180160) | 0.170249 / 0.680424 (-0.510175) | 0.018478 / 0.534201 (-0.515722) | 0.395710 / 0.579283 (-0.183573) | 0.409706 / 0.434364 (-0.024658) | 0.463454 / 0.540337 (-0.076884) | 0.615657 / 1.386936 (-0.771279) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#c5a752d8e8ca0a6ed118b024ba03c1b4a2881177 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007224 / 0.011353 (-0.004129) | 0.004506 / 0.011008 (-0.006503) | 0.096729 / 0.038508 (0.058221) | 0.082394 / 0.023109 (0.059284) | 0.390954 / 0.275898 (0.115056) | 0.416647 / 0.323480 (0.093167) | 0.005894 / 0.007986 (-0.002092) | 0.003756 / 0.004328 (-0.000572) | 0.075800 / 0.004250 (0.071549) | 0.062683 / 0.037052 (0.025631) | 0.398959 / 0.258489 (0.140470) | 0.436624 / 0.293841 (0.142783) | 0.034650 / 0.128546 (-0.093896) | 0.009655 / 0.075646 (-0.065991) | 0.315761 / 0.419271 (-0.103511) | 0.060957 / 0.043533 (0.017424) | 0.385649 / 0.255139 (0.130510) | 0.394022 / 0.283200 (0.110822) | 0.024601 / 0.141683 (-0.117082) | 1.729586 / 1.452155 (0.277431) | 1.724153 / 1.492716 (0.231437) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.207070 / 0.018006 (0.189063) | 0.466502 / 0.000490 (0.466012) | 0.010739 / 0.000200 (0.010540) | 0.000214 / 0.000054 (0.000160) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031633 / 0.037411 (-0.005779) | 0.095345 / 0.014526 (0.080819) | 0.105399 / 0.176557 (-0.071157) | 0.174173 / 0.737135 (-0.562962) | 0.104207 / 0.296338 (-0.192132) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.435312 / 0.215209 (0.220103) | 4.265600 / 2.077655 (2.187946) | 2.056500 / 1.504120 (0.552380) | 1.848023 / 1.541195 (0.306828) | 1.946156 / 1.468490 (0.477666) | 0.557788 / 4.584777 (-4.026989) | 4.070289 / 3.745712 (0.324577) | 3.608027 / 5.269862 (-1.661835) | 2.214556 / 4.565676 (-2.351121) | 0.062623 / 0.424275 (-0.361652) | 0.008083 / 0.007607 (0.000476) | 0.491782 / 0.226044 (0.265738) | 4.989963 / 2.268929 (2.721035) | 2.575867 / 55.444624 (-52.868757) | 2.208045 / 6.876477 (-4.668431) | 2.364184 / 2.142072 (0.222112) | 0.633925 / 4.805227 (-4.171302) | 0.144323 / 6.500664 (-6.356341) | 0.067505 / 0.075469 (-0.007965) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.467219 / 1.841788 (-0.374569) | 22.334967 / 8.074308 (14.260659) | 15.715747 / 10.191392 (5.524355) | 0.175443 / 0.680424 (-0.504980) | 0.026165 / 0.534201 (-0.508036) | 0.490675 / 0.579283 (-0.088608) | 0.509211 / 0.434364 (0.074847) | 0.586303 / 0.540337 
(0.045965) | 0.785052 / 1.386936 (-0.601884) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007893 / 0.011353 (-0.003460) | 0.004577 / 0.011008 (-0.006431) | 0.075781 / 0.038508 (0.037273) | 0.095492 / 0.023109 (0.072382) | 0.433259 / 0.275898 (0.157361) | 0.469386 / 0.323480 (0.145906) | 0.006317 / 0.007986 (-0.001669) | 0.003708 / 0.004328 (-0.000621) | 0.074417 / 0.004250 (0.070167) | 0.068605 / 0.037052 (0.031552) | 0.448701 / 0.258489 (0.190212) | 0.469131 / 0.293841 (0.175290) | 0.036647 / 0.128546 (-0.091899) | 0.010077 / 0.075646 (-0.065570) | 0.082457 / 0.419271 (-0.336815) | 0.063255 / 0.043533 (0.019722) | 0.428144 / 0.255139 (0.173005) | 0.451872 / 0.283200 (0.168672) | 0.033953 / 0.141683 (-0.107730) | 1.781752 / 1.452155 (0.329597) | 1.869014 / 1.492716 (0.376297) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223596 / 0.018006 (0.205590) | 0.470307 / 0.000490 (0.469818) | 0.005059 / 0.000200 (0.004859) | 0.000104 / 0.000054 (0.000049) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.038804 / 0.037411 (0.001393) | 0.117879 / 0.014526 (0.103353) | 0.140701 / 0.176557 (-0.035855) | 0.194672 / 0.737135 (-0.542463) | 0.132806 / 0.296338 (-0.163533) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.510109 / 0.215209 (0.294900) | 4.729457 / 2.077655 (2.651803) | 2.512113 / 1.504120 (1.007993) | 2.302553 / 1.541195 (0.761358) | 2.420462 / 
1.468490 (0.951972) | 0.531682 / 4.584777 (-4.053095) | 4.061208 / 3.745712 (0.315496) | 3.588542 / 5.269862 (-1.681320) | 2.203187 / 4.565676 (-2.362489) | 0.065791 / 0.424275 (-0.358484) | 0.008839 / 0.007607 (0.001232) | 0.562041 / 0.226044 (0.335997) | 5.702340 / 2.268929 (3.433412) | 3.127609 / 55.444624 (-52.317015) | 2.823060 / 6.876477 (-4.053417) | 2.898675 / 2.142072 (0.756603) | 0.659589 / 4.805227 (-4.145638) | 0.148798 / 6.500664 (-6.351866) | 0.070787 / 0.075469 (-0.004682) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.478317 / 1.841788 (-0.363471) | 21.995400 / 8.074308 (13.921092) | 16.770729 / 10.191392 (6.579337) | 0.226333 / 0.680424 (-0.454091) | 0.021835 / 0.534201 (-0.512366) | 0.460373 / 0.579283 (-0.118910) | 0.479494 / 0.434364 (0.045130) | 0.529470 / 0.540337 (-0.010868) | 0.718066 / 1.386936 (-0.668870) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9a717b8eb80b0e50b25818127f79a35e0866fb14 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007824 / 0.011353 (-0.003529) | 0.004601 / 0.011008 (-0.006407) | 0.100025 / 0.038508 (0.061517) | 0.096046 / 0.023109 (0.072936) | 0.376226 / 0.275898 (0.100328) | 0.410905 / 0.323480 (0.087425) | 0.006048 / 0.007986 (-0.001938) | 0.003817 / 0.004328 (-0.000511) | 0.076624 / 0.004250 (0.072374) | 0.066390 / 0.037052 (0.029338) | 0.380098 / 0.258489 (0.121609) | 0.413603 / 0.293841 (0.119762) | 0.036546 / 0.128546 (-0.092001) | 0.009881 / 0.075646 (-0.065765) | 0.344338 / 0.419271 (-0.074934) | 0.061882 / 0.043533 (0.018350) | 0.368568 / 0.255139 (0.113429) | 0.397133 / 0.283200 (0.113934) | 0.027255 / 0.141683 (-0.114428) | 1.795099 / 1.452155 (0.342945) | 1.852443 / 1.492716 (0.359727) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.247436 / 0.018006 (0.229430) | 0.494119 / 0.000490 (0.493629) | 0.004359 / 0.000200 
(0.004159) | 0.000089 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034765 / 0.037411 (-0.002647) | 0.104541 / 0.014526 (0.090015) | 0.113898 / 0.176557 (-0.062659) | 0.183634 / 0.737135 (-0.553501) | 0.116423 / 0.296338 (-0.179916) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458747 / 0.215209 (0.243538) | 4.555740 / 2.077655 (2.478085) | 2.217240 / 1.504120 (0.713121) | 2.039879 / 1.541195 (0.498684) | 2.088581 / 1.468490 (0.620091) | 0.588063 / 4.584777 (-3.996714) | 4.238226 / 3.745712 (0.492514) | 4.768060 / 5.269862 (-0.501802) | 2.857117 / 4.565676 (-1.708560) | 0.068742 / 0.424275 (-0.355533) | 0.008667 / 0.007607 (0.001059) | 0.549294 / 0.226044 (0.323249) | 5.464635 / 2.268929 (3.195706) | 2.744435 / 55.444624 (-52.700189) | 2.347660 / 6.876477 (-4.528816) | 2.616816 / 2.142072 (0.474743) | 0.703701 / 4.805227 (-4.101526) | 0.159749 / 6.500664 (-6.340915) | 0.071990 / 0.075469 (-0.003479) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.486599 / 1.841788 (-0.355188) | 22.745438 / 8.074308 (14.671130) | 16.822332 / 10.191392 (6.630940) | 0.184730 / 0.680424 (-0.495694) | 0.021267 / 0.534201 (-0.512934) | 0.467108 / 0.579283 (-0.112176) | 0.472674 / 0.434364 (0.038311) | 0.548094 / 0.540337 (0.007756) | 0.735885 / 1.386936 (-0.651051) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007746 / 0.011353 (-0.003607) | 0.004585 / 0.011008 (-0.006423) | 0.076943 / 0.038508 (0.038435) | 0.087473 / 0.023109 (0.064363) | 0.480099 / 0.275898 (0.204201) | 0.495271 / 0.323480 (0.171791) | 0.006348 / 0.007986 (-0.001638) | 0.003902 / 0.004328 (-0.000426) | 0.077586 / 0.004250 (0.073335) | 0.066467 / 0.037052 (0.029415) | 0.468741 / 0.258489 (0.210252) | 0.506778 / 0.293841 (0.212937) | 0.036877 / 0.128546 (-0.091669) | 0.010102 / 0.075646 (-0.065545) | 0.084419 / 0.419271 (-0.334852) | 0.058721 / 0.043533 (0.015188) | 0.453633 / 0.255139 (0.198494) | 0.481171 / 0.283200 (0.197971) | 0.028716 / 0.141683 (-0.112967) | 1.853048 / 1.452155 (0.400893) | 1.885847 / 1.492716 (0.393130) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192136 / 0.018006 (0.174130) | 0.484481 / 0.000490 (0.483991) | 0.002951 / 0.000200 (0.002751) | 0.000098 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037949 / 0.037411 (0.000538) | 0.108364 / 0.014526 (0.093838) | 0.119542 / 0.176557 (-0.057014) | 0.188542 / 0.737135 (-0.548593) | 0.122011 / 0.296338 (-0.174327) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.483135 / 0.215209 (0.267926) | 4.849715 / 2.077655 (2.772060) | 2.497736 / 1.504120 (0.993616) | 2.314243 / 1.541195 (0.773048) | 2.412739 / 1.468490 (0.944249) | 0.564137 / 4.584777 (-4.020639) | 4.242273 / 3.745712 (0.496561) | 6.337843 / 5.269862 (1.067982) | 3.923250 / 4.565676 (-0.642426) | 0.066464 / 0.424275 (-0.357811) | 0.009217 / 0.007607 (0.001610) | 0.575667 / 0.226044 (0.349623) | 5.746187 / 2.268929 (3.477258) | 3.069655 / 55.444624 (-52.374969) | 2.674798 / 6.876477 (-4.201679) | 2.956535 / 2.142072 (0.814463) | 0.701043 / 4.805227 (-4.104185) | 0.157241 / 6.500664 (-6.343423) | 0.073175 / 0.075469 (-0.002294) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.609943 / 1.841788 (-0.231844) | 23.478594 / 8.074308 (15.404286) | 17.454437 / 10.191392 (7.263045) | 0.186422 / 0.680424 (-0.494002) | 0.021703 / 0.534201 (-0.512498) | 0.471704 / 0.579283 (-0.107579) | 0.480553 / 0.434364 (0.046189) | 0.552881 / 0.540337 (0.012544) | 0.722515 / 1.386936 (-0.664421) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#84645f80049cd00d9e0d4908faf3c3203fdcf21d \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007542 / 0.011353 (-0.003811) | 0.004692 / 0.011008 (-0.006316) | 0.099155 / 0.038508 (0.060647) | 0.089365 / 0.023109 (0.066256) | 0.370870 / 0.275898 (0.094972) | 0.422152 / 0.323480 (0.098673) | 0.006223 / 0.007986 (-0.001763) | 0.003852 / 0.004328 (-0.000476) | 0.075438 / 0.004250 (0.071188) | 0.065973 / 0.037052 (0.028921) | 0.381513 / 0.258489 (0.123024) | 0.416196 / 0.293841 (0.122355) | 0.035483 / 0.128546 (-0.093063) | 0.009884 / 0.075646 (-0.065762) | 0.341290 / 0.419271 (-0.077982) | 0.060546 / 0.043533 (0.017014) | 0.365101 / 0.255139 (0.109962) | 0.391058 / 0.283200 (0.107859) | 0.026325 / 0.141683 (-0.115358) | 1.815168 / 1.452155 (0.363013) | 1.834711 / 1.492716 (0.341994) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.222177 / 0.018006 (0.204171) | 0.501151 / 0.000490 (0.500662) | 0.010202 / 0.000200 (0.010002) | 0.000102 / 0.000054 (0.000048) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.034043 / 0.037411 (-0.003368) | 0.097884 / 0.014526 (0.083358) | 0.114022 / 0.176557 (-0.062534) | 0.186200 / 0.737135 (-0.550935) | 0.115555 / 0.296338 (-0.180783) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.485857 / 0.215209 
(0.270648) | 4.959263 / 2.077655 (2.881608) | 2.501085 / 1.504120 (0.996965) | 2.234660 / 1.541195 (0.693465) | 2.238585 / 1.468490 (0.770095) | 0.645431 / 4.584777 (-3.939345) | 4.434311 / 3.745712 (0.688599) | 4.771491 / 5.269862 (-0.498371) | 2.778963 / 4.565676 (-1.786714) | 0.075615 / 0.424275 (-0.348660) | 0.009502 / 0.007607 (0.001895) | 0.546539 / 0.226044 (0.320495) | 5.464242 / 2.268929 (3.195314) | 2.894101 / 55.444624 (-52.550524) | 2.513761 / 6.876477 (-4.362715) | 2.719843 / 2.142072 (0.577770) | 0.678828 / 4.805227 (-4.126399) | 0.157839 / 6.500664 (-6.342825) | 0.071305 / 0.075469 (-0.004164) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.496879 / 1.841788 (-0.344909) | 22.214452 / 8.074308 (14.140144) | 17.707541 / 10.191392 (7.516149) | 0.197008 / 0.680424 (-0.483416) | 0.024883 / 0.534201 (-0.509318) | 0.493611 / 0.579283 (-0.085672) | 0.500677 / 0.434364 (0.066313) | 0.569381 / 0.540337 (0.029044) | 0.773950 / 1.386936 (-0.612986) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007337 / 0.011353 (-0.004015) | 0.004572 / 0.011008 (-0.006436) | 0.091123 / 0.038508 (0.052615) | 0.079762 / 0.023109 (0.056652) | 0.450527 / 0.275898 (0.174629) | 0.525097 / 0.323480 (0.201617) | 0.005873 / 0.007986 (-0.002112) | 0.003797 / 0.004328 (-0.000532) | 0.076259 / 0.004250 (0.072009) | 0.062745 / 0.037052 (0.025692) | 0.465553 / 0.258489 (0.207064) | 0.546026 / 0.293841 (0.252186) | 0.035638 / 0.128546 (-0.092909) | 0.010086 / 0.075646 (-0.065560) | 0.109269 / 0.419271 (-0.310002) | 0.056765 / 0.043533 (0.013233) | 0.440887 / 0.255139 (0.185748) | 0.513325 / 0.283200 (0.230125) | 0.027206 / 0.141683 (-0.114476) | 1.863564 / 1.452155 (0.411409) | 1.918206 / 1.492716 (0.425490) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.266479 / 0.018006 (0.248473) | 0.487971 / 0.000490 (0.487481) | 0.012246 / 0.000200 (0.012046) | 0.000119 / 0.000054 
(0.000065) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035281 / 0.037411 (-0.002130) | 0.102991 / 0.014526 (0.088465) | 0.114638 / 0.176557 (-0.061919) | 0.184117 / 0.737135 (-0.553018) | 0.117943 / 0.296338 (-0.178396) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.497897 / 0.215209 (0.282688) | 4.973806 / 2.077655 (2.896151) | 2.596146 / 1.504120 (1.092026) | 2.419694 / 1.541195 (0.878499) | 2.525784 / 1.468490 (1.057294) | 0.568021 / 4.584777 (-4.016756) | 4.296431 / 3.745712 (0.550719) | 3.690682 / 5.269862 (-1.579179) | 2.345965 / 4.565676 (-2.219712) | 0.066859 / 0.424275 (-0.357416) | 0.009093 / 0.007607 (0.001486) | 0.582616 / 0.226044 (0.356571) | 5.826528 / 2.268929 (3.557600) | 3.253222 / 55.444624 (-52.191403) | 2.798447 / 6.876477 (-4.078030) | 3.054609 / 2.142072 (0.912537) | 0.678816 / 4.805227 (-4.126411) | 0.157966 / 6.500664 (-6.342698) | 0.073797 / 0.075469 (-0.001672) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.599480 / 1.841788 (-0.242308) | 23.249738 / 8.074308 (15.175430) | 16.965406 / 10.191392 (6.774014) | 0.171390 / 0.680424 (-0.509034) | 0.021810 / 0.534201 (-0.512391) | 0.483339 / 0.579283 (-0.095944) | 0.496615 / 0.434364 (0.062251) | 0.583786 / 0.540337 (0.043448) | 0.741699 / 1.386936 (-0.645237) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#7935cd2e564f5d1c66ed1acf731703724ba7a287 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006054 / 0.011353 (-0.005299) | 0.003706 / 0.011008 (-0.007302) | 0.080060 / 0.038508 (0.041552) | 0.061479 / 0.023109 (0.038370) | 0.327981 / 0.275898 (0.052083) | 0.356930 / 0.323480 (0.033450) | 0.004671 / 0.007986 (-0.003315) | 0.002901 / 0.004328 (-0.001428) | 0.062425 / 0.004250 (0.058174) | 0.046310 / 0.037052 (0.009258) | 0.323657 / 0.258489 (0.065168) | 0.370130 / 0.293841 (0.076289) | 0.027151 / 0.128546 (-0.101395) | 0.007850 / 0.075646 (-0.067797) | 0.262300 / 0.419271 (-0.156971) | 0.045456 / 0.043533 (0.001923) | 0.325569 / 0.255139 (0.070430) | 0.352962 / 0.283200 (0.069762) | 0.020156 / 0.141683 (-0.121527) | 1.429404 / 1.452155 (-0.022750) | 1.615032 / 1.492716 (0.122316) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.187309 / 0.018006 (0.169303) | 0.428848 / 0.000490 (0.428358) | 0.003599 / 0.000200 (0.003399) | 0.000069 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023260 / 0.037411 (-0.014151) | 0.072467 / 0.014526 (0.057941) | 0.082398 / 0.176557 (-0.094159) | 0.142573 / 0.737135 (-0.594562) | 0.082570 / 0.296338 (-0.213768) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.426503 / 0.215209 (0.211294) | 4.267875 / 2.077655 (2.190220) | 2.189762 / 1.504120 (0.685642) | 2.027992 / 1.541195 (0.486798) | 2.053211 / 1.468490 (0.584721) | 0.503850 / 4.584777 (-4.080927) | 3.086444 / 3.745712 (-0.659268) | 3.319492 / 5.269862 (-1.950370) | 2.070714 / 4.565676 (-2.494962) | 0.057591 / 0.424275 (-0.366684) | 0.006407 / 0.007607 (-0.001200) | 0.501145 / 0.226044 (0.275100) | 5.017753 / 2.268929 (2.748825) | 2.643145 / 55.444624 (-52.801479) | 2.327440 / 6.876477 (-4.549037) | 2.460250 / 2.142072 (0.318178) | 0.589397 / 4.805227 (-4.215830) | 0.124948 / 6.500664 (-6.375716) | 0.060450 / 0.075469 (-0.015020) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279870 / 1.841788 (-0.561918) | 18.115908 / 8.074308 (10.041600) | 13.570032 / 10.191392 (3.378640) | 0.132981 / 0.680424 (-0.547442) | 0.016942 / 0.534201 (-0.517259) | 0.333591 / 0.579283 (-0.245692) | 0.358844 / 0.434364 (-0.075520) | 
0.395748 / 0.540337 (-0.144590) | 0.546213 / 1.386936 (-0.840723) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006062 / 0.011353 (-0.005291) | 0.003673 / 0.011008 (-0.007336) | 0.064726 / 0.038508 (0.026218) | 0.061854 / 0.023109 (0.038745) | 0.385343 / 0.275898 (0.109445) | 0.441284 / 0.323480 (0.117805) | 0.004830 / 0.007986 (-0.003156) | 0.002909 / 0.004328 (-0.001420) | 0.063874 / 0.004250 (0.059624) | 0.049331 / 0.037052 (0.012278) | 0.418484 / 0.258489 (0.159995) | 0.451397 / 0.293841 (0.157556) | 0.027665 / 0.128546 (-0.100881) | 0.008088 / 0.075646 (-0.067558) | 0.069625 / 0.419271 (-0.349646) | 0.043437 / 0.043533 (-0.000095) | 0.359789 / 0.255139 (0.104650) | 0.430206 / 0.283200 (0.147007) | 0.022308 / 0.141683 (-0.119375) | 1.461030 / 1.452155 (0.008875) | 1.513683 / 1.492716 (0.020966) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230958 / 0.018006 (0.212952) | 0.417553 / 0.000490 (0.417063) | 0.000802 / 0.000200 (0.000602) | 0.000066 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025421 / 0.037411 (-0.011991) | 0.077156 / 0.014526 (0.062630) | 0.087533 / 0.176557 (-0.089024) | 0.138048 / 0.737135 (-0.599087) | 0.089358 / 0.296338 (-0.206981) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.439172 / 0.215209 (0.223963) | 4.409509 / 2.077655 (2.331854) | 2.491270 / 1.504120 (0.987150) | 2.308446 / 1.541195 
(0.767252) | 2.378440 / 1.468490 (0.909950) | 0.499834 / 4.584777 (-4.084943) | 3.083168 / 3.745712 (-0.662544) | 2.867543 / 5.269862 (-2.402318) | 1.876354 / 4.565676 (-2.689323) | 0.057092 / 0.424275 (-0.367183) | 0.006955 / 0.007607 (-0.000653) | 0.513799 / 0.226044 (0.287754) | 5.126660 / 2.268929 (2.857731) | 2.917348 / 55.444624 (-52.527277) | 2.508035 / 6.876477 (-4.368441) | 2.698089 / 2.142072 (0.556016) | 0.586828 / 4.805227 (-4.218399) | 0.124740 / 6.500664 (-6.375924) | 0.062276 / 0.075469 (-0.013193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.291624 / 1.841788 (-0.550164) | 18.199968 / 8.074308 (10.125660) | 13.888139 / 10.191392 (3.696747) | 0.162955 / 0.680424 (-0.517469) | 0.017343 / 0.534201 (-0.516858) | 0.334683 / 0.579283 (-0.244600) | 0.352708 / 0.434364 (-0.081656) | 0.400629 / 0.540337 (-0.139708) | 0.539497 / 1.386936 (-0.847439) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#e7976db7fe22c6b93a869488d07b8137ea6a0db4 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007500 / 0.011353 (-0.003853) | 0.004498 / 0.011008 (-0.006510) | 0.100239 / 0.038508 (0.061731) | 0.083424 / 0.023109 (0.060315) | 0.366664 / 0.275898 (0.090766) | 0.406641 / 0.323480 (0.083161) | 0.004577 / 0.007986 (-0.003409) | 0.004809 / 0.004328 (0.000480) | 0.076898 / 0.004250 (0.072647) | 0.064021 / 0.037052 (0.026969) | 0.375836 / 0.258489 (0.117347) | 0.413008 / 0.293841 (0.119167) | 0.036010 / 0.128546 (-0.092537) | 0.009655 / 0.075646 (-0.065991) | 0.342595 / 0.419271 (-0.076677) | 0.061846 / 0.043533 (0.018313) | 0.376543 / 0.255139 (0.121404) | 0.395858 / 0.283200 (0.112659) | 0.026792 / 0.141683 (-0.114891) | 1.775569 / 1.452155 (0.323414) | 1.865077 / 1.492716 (0.372360) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.221521 / 0.018006 (0.203514) | 0.474604 / 0.000490 
(0.474114) | 0.004354 / 0.000200 (0.004154) | 0.000090 / 0.000054 (0.000035) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032947 / 0.037411 (-0.004464) | 0.100454 / 0.014526 (0.085928) | 0.111955 / 0.176557 (-0.064602) | 0.179752 / 0.737135 (-0.557383) | 0.114282 / 0.296338 (-0.182056) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.458261 / 0.215209 (0.243052) | 4.563536 / 2.077655 (2.485881) | 2.231928 / 1.504120 (0.727808) | 2.036751 / 1.541195 (0.495556) | 2.170413 / 1.468490 (0.701923) | 0.570825 / 4.584777 (-4.013952) | 4.505762 / 3.745712 (0.760050) | 5.033461 / 5.269862 (-0.236401) | 2.704989 / 4.565676 (-1.860687) | 0.067011 / 0.424275 (-0.357264) | 0.008568 / 0.007607 (0.000961) | 0.545151 / 0.226044 (0.319106) | 5.438984 / 2.268929 (3.170055) | 2.771818 / 55.444624 (-52.672806) | 2.393082 / 6.876477 (-4.483395) | 2.467173 / 2.142072 (0.325101) | 0.678849 / 4.805227 (-4.126379) | 0.160480 / 6.500664 (-6.340184) | 0.073681 / 0.075469 (-0.001788) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.532272 / 1.841788 (-0.309516) | 22.548741 / 8.074308 (14.474433) | 17.091044 / 10.191392 (6.899652) | 0.172100 / 0.680424 (-0.508324) | 0.022220 / 0.534201 (-0.511981) | 0.467871 / 0.579283 (-0.111412) | 0.491135 / 0.434364 (0.056771) | 0.548433 / 0.540337 (0.008096) | 0.733340 / 1.386936 (-0.653596) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007593 / 0.011353 (-0.003760) | 0.004656 / 0.011008 (-0.006352) | 0.076940 / 0.038508 (0.038431) | 0.085183 / 0.023109 (0.062073) | 0.447178 / 0.275898 (0.171280) | 0.469545 / 0.323480 (0.146065) | 0.006023 / 0.007986 (-0.001962) | 0.003808 / 0.004328 (-0.000520) | 0.076767 / 0.004250 (0.072517) | 0.065713 / 0.037052 (0.028661) | 0.445573 / 0.258489 (0.187084) | 0.481689 / 0.293841 (0.187848) | 0.036893 / 0.128546 (-0.091654) | 0.009976 / 0.075646 (-0.065670) | 0.084443 / 0.419271 (-0.334829) | 0.058829 / 0.043533 (0.015297) | 0.429291 / 0.255139 (0.174152) | 0.454016 / 0.283200 (0.170816) | 0.027289 / 0.141683 (-0.114394) | 1.806786 / 1.452155 (0.354632) | 1.887680 / 1.492716 (0.394964) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.241012 / 0.018006 (0.223006) | 0.470629 / 0.000490 (0.470139) | 0.003213 / 0.000200 (0.003013) | 0.000107 / 0.000054 (0.000052) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036896 / 0.037411 (-0.000515) | 0.106932 / 0.014526 (0.092406) | 0.120333 / 0.176557 (-0.056223) | 0.186271 / 0.737135 (-0.550865) | 0.121581 / 0.296338 (-0.174758) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.507782 / 0.215209 (0.292573) | 5.062932 / 2.077655 (2.985278) | 2.689539 / 1.504120 (1.185419) | 2.482978 / 1.541195 (0.941784) | 2.561320 / 1.468490 (1.092830) | 0.570664 / 4.584777 (-4.014113) | 4.346051 / 3.745712 (0.600339) | 6.479374 / 5.269862 (1.209513) | 4.096483 / 4.565676 (-0.469194) | 0.067564 / 0.424275 (-0.356711) | 0.009147 / 0.007607 (0.001540) | 0.596059 / 0.226044 (0.370015) | 5.963223 / 2.268929 (3.694295) | 3.201039 / 55.444624 (-52.243585) | 2.816581 / 6.876477 (-4.059896) | 3.047821 / 2.142072 (0.905748) | 0.687749 / 4.805227 (-4.117478) | 0.158174 / 6.500664 (-6.342490) | 0.073329 / 0.075469 (-0.002140) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.601346 / 1.841788 (-0.240441) | 23.712210 / 8.074308 (15.637902) | 16.567272 / 10.191392 (6.375880) | 0.224745 / 0.680424 (-0.455679) | 0.021662 / 0.534201 (-0.512539) | 0.471427 / 0.579283 (-0.107856) | 0.498751 / 0.434364 (0.064387) | 0.572047 / 0.540337 (0.031710) | 0.821868 / 1.386936 (-0.565068) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#34d0c9027c750adc89f3d04a6bf2e9cb95915da4 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006371 / 0.011353 (-0.004981) | 0.003749 / 0.011008 (-0.007259) | 0.084155 / 0.038508 (0.045647) | 0.072450 / 0.023109 (0.049340) | 0.308002 / 0.275898 (0.032104) | 0.340471 / 0.323480 (0.016991) | 0.005054 / 0.007986 (-0.002931) | 0.003176 / 0.004328 (-0.001152) | 0.064867 / 0.004250 (0.060616) | 0.054305 / 0.037052 (0.017252) | 0.321047 / 0.258489 (0.062558) | 0.345999 / 0.293841 (0.052158) | 0.030507 / 0.128546 (-0.098039) | 0.008299 / 0.075646 (-0.067347) | 0.287682 / 0.419271 (-0.131590) | 0.052048 / 0.043533 (0.008515) | 0.308322 / 0.255139 (0.053183) | 0.333220 / 0.283200 (0.050020) | 0.022698 / 0.141683 (-0.118985) | 1.474033 / 1.452155 (0.021879) | 1.544790 / 1.492716 (0.052074) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.200612 / 0.018006 (0.182606) | 0.450934 / 0.000490 (0.450445) | 0.005383 / 0.000200 (0.005183) | 0.000200 / 0.000054 (0.000145) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027759 / 0.037411 (-0.009652) | 0.080935 / 0.014526 (0.066409) | 0.093041 / 0.176557 (-0.083516) | 0.148643 / 0.737135 (-0.588492) | 0.093463 / 0.296338 (-0.202876) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.381653 / 0.215209 
(0.166444) | 3.810699 / 2.077655 (1.733044) | 1.866858 / 1.504120 (0.362738) | 1.716985 / 1.541195 (0.175790) | 1.788071 / 1.468490 (0.319581) | 0.481130 / 4.584777 (-4.103647) | 3.529798 / 3.745712 (-0.215914) | 3.982037 / 5.269862 (-1.287824) | 2.324866 / 4.565676 (-2.240811) | 0.056767 / 0.424275 (-0.367508) | 0.007306 / 0.007607 (-0.000301) | 0.459472 / 0.226044 (0.233428) | 4.602808 / 2.268929 (2.333879) | 2.332014 / 55.444624 (-53.112610) | 2.044858 / 6.876477 (-4.831619) | 2.204165 / 2.142072 (0.062093) | 0.577946 / 4.805227 (-4.227281) | 0.130900 / 6.500664 (-6.369764) | 0.059054 / 0.075469 (-0.016415) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.245211 / 1.841788 (-0.596576) | 19.176397 / 8.074308 (11.102089) | 13.995280 / 10.191392 (3.803888) | 0.171743 / 0.680424 (-0.508681) | 0.018038 / 0.534201 (-0.516163) | 0.392338 / 0.579283 (-0.186945) | 0.419370 / 0.434364 (-0.014994) | 0.477829 / 0.540337 (-0.062508) | 0.677409 / 1.386936 (-0.709527) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006513 / 0.011353 (-0.004840) | 0.003984 / 0.011008 (-0.007024) | 0.064516 / 0.038508 (0.026008) | 0.070504 / 0.023109 (0.047395) | 0.384509 / 0.275898 (0.108611) | 0.410564 / 0.323480 (0.087084) | 0.005310 / 0.007986 (-0.002675) | 0.003268 / 0.004328 (-0.001061) | 0.064684 / 0.004250 (0.060433) | 0.055367 / 0.037052 (0.018315) | 0.399108 / 0.258489 (0.140619) | 0.422740 / 0.293841 (0.128900) | 0.031624 / 0.128546 (-0.096922) | 0.008617 / 0.075646 (-0.067030) | 0.070929 / 0.419271 (-0.348342) | 0.049146 / 0.043533 (0.005613) | 0.385492 / 0.255139 (0.130353) | 0.407434 / 0.283200 (0.124234) | 0.021972 / 0.141683 (-0.119711) | 1.496135 / 1.452155 (0.043980) | 1.533739 / 1.492716 (0.041023) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.226218 / 0.018006 (0.208211) | 0.443176 / 0.000490 (0.442686) | 0.000376 / 0.000200 (0.000176) | 0.000055 / 0.000054 
(0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030315 / 0.037411 (-0.007097) | 0.086416 / 0.014526 (0.071890) | 0.097725 / 0.176557 (-0.078831) | 0.150407 / 0.737135 (-0.586728) | 0.099914 / 0.296338 (-0.196424) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.409807 / 0.215209 (0.194598) | 4.099086 / 2.077655 (2.021431) | 2.103160 / 1.504120 (0.599040) | 1.927927 / 1.541195 (0.386733) | 1.977751 / 1.468490 (0.509261) | 0.476995 / 4.584777 (-4.107781) | 3.521835 / 3.745712 (-0.223877) | 3.237695 / 5.269862 (-2.032167) | 1.995953 / 4.565676 (-2.569724) | 0.056208 / 0.424275 (-0.368068) | 0.007660 / 0.007607 (0.000053) | 0.483537 / 0.226044 (0.257492) | 4.833974 / 2.268929 (2.565046) | 2.589115 / 55.444624 (-52.855510) | 2.228076 / 6.876477 (-4.648401) | 2.395271 / 2.142072 (0.253198) | 0.577534 / 4.805227 (-4.227694) | 0.131432 / 6.500664 (-6.369232) | 0.060999 / 0.075469 (-0.014471) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.356043 / 1.841788 (-0.485745) | 19.470401 / 8.074308 (11.396093) | 14.091266 / 10.191392 (3.899874) | 0.166809 / 0.680424 (-0.513615) | 0.018782 / 0.534201 (-0.515419) | 0.394916 / 0.579283 (-0.184367) | 0.411378 / 0.434364 (-0.022986) | 0.466886 / 0.540337 (-0.073451) | 0.617369 / 1.386936 (-0.769567) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#601ae6c7baff33a600fd10b12940966024fd2221 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007590 / 0.011353 (-0.003762) | 0.004068 / 0.011008 (-0.006941) | 0.105479 / 0.038508 (0.066971) | 0.085614 / 0.023109 (0.062505) | 0.384325 / 0.275898 (0.108427) | 0.467867 / 0.323480 (0.144387) | 0.004652 / 0.007986 (-0.003333) | 0.005445 / 0.004328 (0.001117) | 0.079604 / 0.004250 (0.075353) | 0.066031 / 0.037052 (0.028978) | 0.426184 / 0.258489 (0.167695) | 0.480712 / 0.293841 (0.186871) | 0.037837 / 0.128546 (-0.090709) | 0.009765 / 0.075646 (-0.065882) | 0.351316 / 0.419271 (-0.067955) | 0.063634 / 0.043533 (0.020101) | 0.420297 / 0.255139 (0.165158) | 0.449169 / 0.283200 (0.165969) | 0.030947 / 0.141683 (-0.110736) | 1.840184 / 1.452155 (0.388029) | 1.934074 / 1.492716 (0.441357) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223483 / 0.018006 (0.205477) | 0.521086 / 0.000490 (0.520596) | 0.000379 / 0.000200 (0.000179) | 0.000065 / 0.000054 (0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032011 / 0.037411 (-0.005400) | 0.101474 / 0.014526 (0.086948) | 0.108652 / 0.176557 (-0.067904) | 0.173340 / 0.737135 (-0.563796) | 0.114186 / 0.296338 (-0.182153) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.478020 / 0.215209 (0.262811) | 4.645400 / 2.077655 (2.567746) | 2.590763 / 1.504120 (1.086643) | 2.383002 / 1.541195 (0.841807) | 2.482550 / 1.468490 (1.014060) | 0.572417 / 4.584777 (-4.012360) | 4.233436 / 3.745712 (0.487724) | 4.858823 / 5.269862 (-0.411038) | 2.838913 / 4.565676 (-1.726764) | 0.070010 / 0.424275 (-0.354265) | 0.009602 / 0.007607 (0.001995) | 0.538735 / 0.226044 (0.312691) | 5.534340 / 2.268929 (3.265411) | 2.915006 / 55.444624 (-52.529619) | 2.625132 / 6.876477 (-4.251345) | 2.537838 / 2.142072 (0.395766) | 0.667870 / 4.805227 (-4.137357) | 0.146330 / 6.500664 (-6.354334) | 0.071631 / 0.075469 (-0.003838) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.594686 / 1.841788 (-0.247101) | 22.311113 / 8.074308 (14.236804) | 17.603983 / 10.191392 (7.412591) | 0.195995 / 0.680424 (-0.484428) | 0.022254 / 0.534201 (-0.511947) | 0.479661 / 0.579283 (-0.099622) | 0.463626 / 0.434364 (0.029262) | 
0.483465 / 0.540337 (-0.056873) | 0.676141 / 1.386936 (-0.710795) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006146 / 0.011353 (-0.005207) | 0.004856 / 0.011008 (-0.006152) | 0.067506 / 0.038508 (0.028998) | 0.073968 / 0.023109 (0.050859) | 0.470013 / 0.275898 (0.194115) | 0.479022 / 0.323480 (0.155542) | 0.005972 / 0.007986 (-0.002014) | 0.003846 / 0.004328 (-0.000483) | 0.075141 / 0.004250 (0.070890) | 0.058597 / 0.037052 (0.021544) | 0.481454 / 0.258489 (0.222965) | 0.515634 / 0.293841 (0.221793) | 0.034979 / 0.128546 (-0.093567) | 0.010385 / 0.075646 (-0.065261) | 0.072649 / 0.419271 (-0.346622) | 0.058183 / 0.043533 (0.014650) | 0.462138 / 0.255139 (0.206999) | 0.476093 / 0.283200 (0.192893) | 0.032918 / 0.141683 (-0.108765) | 1.820530 / 1.452155 (0.368375) | 1.626360 / 1.492716 (0.133644) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.208970 / 0.018006 (0.190964) | 0.492478 / 0.000490 (0.491988) | 0.005487 / 0.000200 (0.005287) | 0.000140 / 0.000054 (0.000086) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.037896 / 0.037411 (0.000484) | 0.089752 / 0.014526 (0.075227) | 0.107445 / 0.176557 (-0.069111) | 0.181260 / 0.737135 (-0.555876) | 0.105700 / 0.296338 (-0.190639) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.495031 / 0.215209 (0.279821) | 4.806939 / 2.077655 (2.729284) | 2.227928 / 1.504120 (0.723808) | 2.067117 / 1.541195 
(0.525922) | 2.348982 / 1.468490 (0.880492) | 0.567201 / 4.584777 (-4.017576) | 4.166592 / 3.745712 (0.420880) | 3.654329 / 5.269862 (-1.615533) | 2.331092 / 4.565676 (-2.234584) | 0.062212 / 0.424275 (-0.362063) | 0.008775 / 0.007607 (0.001168) | 0.515413 / 0.226044 (0.289369) | 5.449300 / 2.268929 (3.180371) | 3.206574 / 55.444624 (-52.238050) | 2.600455 / 6.876477 (-4.276022) | 3.041162 / 2.142072 (0.899089) | 0.681899 / 4.805227 (-4.123328) | 0.155400 / 6.500664 (-6.345265) | 0.073933 / 0.075469 (-0.001537) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.572329 / 1.841788 (-0.269459) | 23.638519 / 8.074308 (15.564211) | 17.145663 / 10.191392 (6.954271) | 0.232690 / 0.680424 (-0.447734) | 0.028620 / 0.534201 (-0.505581) | 0.488105 / 0.579283 (-0.091178) | 0.490365 / 0.434364 (0.056001) | 0.599501 / 0.540337 (0.059164) | 0.708101 / 1.386936 (-0.678835) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#4a761315900880a25b347ad19b78bd567cfce1f0 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005947 / 0.011353 (-0.005406) | 0.003577 / 0.011008 (-0.007431) | 0.081631 / 0.038508 (0.043122) | 0.058651 / 0.023109 (0.035541) | 0.342742 / 0.275898 (0.066843) | 0.384130 / 0.323480 (0.060650) | 0.004620 / 0.007986 (-0.003366) | 0.002885 / 0.004328 (-0.001444) | 0.063698 / 0.004250 (0.059448) | 0.048953 / 0.037052 (0.011901) | 0.367880 / 0.258489 (0.109391) | 0.407050 / 0.293841 (0.113209) | 0.027242 / 0.128546 (-0.101305) | 0.007914 / 0.075646 (-0.067733) | 0.262156 / 0.419271 (-0.157116) | 0.044750 / 0.043533 (0.001218) | 0.351613 / 0.255139 (0.096474) | 0.380284 / 0.283200 (0.097084) | 0.020080 / 0.141683 (-0.121603) | 1.498101 / 1.452155 (0.045946) | 1.543608 / 1.492716 (0.050892) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.180014 / 0.018006 (0.162008) | 0.436172 / 0.000490 (0.435682) 
| 0.003694 / 0.000200 (0.003494) | 0.000071 / 0.000054 (0.000017) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024389 / 0.037411 (-0.013022) | 0.072874 / 0.014526 (0.058348) | 0.083469 / 0.176557 (-0.093088) | 0.144600 / 0.737135 (-0.592536) | 0.084229 / 0.296338 (-0.212110) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.391636 / 0.215209 (0.176427) | 3.906941 / 2.077655 (1.829286) | 1.901944 / 1.504120 (0.397825) | 1.762702 / 1.541195 (0.221507) | 1.817970 / 1.468490 (0.349480) | 0.500345 / 4.584777 (-4.084432) | 3.011351 / 3.745712 (-0.734361) | 4.417763 / 5.269862 (-0.852098) | 2.689744 / 4.565676 (-1.875933) | 0.057765 / 0.424275 (-0.366511) | 0.006412 / 0.007607 (-0.001195) | 0.468156 / 0.226044 (0.242112) | 4.664975 / 2.268929 (2.396047) | 2.323355 / 55.444624 (-53.121270) | 1.984280 / 6.876477 (-4.892197) | 2.165215 / 2.142072 (0.023142) | 0.586950 / 4.805227 (-4.218278) | 0.124363 / 6.500664 (-6.376301) | 0.060702 / 0.075469 (-0.014767) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.238870 / 1.841788 (-0.602917) | 18.587360 / 8.074308 (10.513052) | 13.831674 / 10.191392 (3.640282) | 0.143542 / 0.680424 (-0.536882) | 0.016913 / 0.534201 (-0.517288) | 0.332314 / 0.579283 (-0.246969) | 0.345419 / 0.434364 (-0.088945) | 0.381257 / 0.540337 (-0.159081) | 0.537844 / 1.386936 (-0.849092) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006294 / 0.011353 (-0.005059) | 0.003714 / 0.011008 (-0.007294) | 0.062684 / 0.038508 (0.024176) | 0.063520 / 0.023109 (0.040411) | 0.389591 / 0.275898 (0.113693) | 0.444278 / 0.323480 (0.120798) | 0.004825 / 0.007986 (-0.003160) | 0.003010 / 0.004328 (-0.001318) | 0.062767 / 0.004250 (0.058517) | 0.051739 / 0.037052 (0.014686) | 0.434299 / 0.258489 (0.175810) | 0.452003 / 0.293841 (0.158162) | 0.027375 / 0.128546 (-0.101171) | 0.008135 / 0.075646 (-0.067511) | 0.067401 / 0.419271 (-0.351871) | 0.042752 / 0.043533 (-0.000780) | 0.367633 / 0.255139 (0.112494) | 0.433039 / 0.283200 (0.149840) | 0.021086 / 0.141683 (-0.120597) | 1.488024 / 1.452155 (0.035870) | 1.507767 / 1.492716 (0.015050) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230046 / 0.018006 (0.212040) | 0.428085 / 0.000490 (0.427595) | 0.002188 / 0.000200 (0.001988) | 0.000070 / 0.000054 (0.000015) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026705 / 0.037411 (-0.010706) | 0.082466 / 0.014526 (0.067940) | 0.089378 / 0.176557 (-0.087179) | 0.147287 / 0.737135 (-0.589849) | 0.090426 / 0.296338 (-0.205913) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.430882 / 0.215209 (0.215672) | 4.296224 / 2.077655 (2.218569) | 2.229982 / 1.504120 (0.725862) | 2.048506 / 1.541195 (0.507311) | 2.129514 / 1.468490 (0.661024) | 0.502964 / 4.584777 (-4.081813) | 3.048125 / 3.745712 (-0.697587) | 4.208636 / 5.269862 (-1.061226) | 2.594015 / 4.565676 (-1.971661) | 0.057967 / 0.424275 (-0.366308) | 0.006875 / 0.007607 (-0.000732) | 0.513872 / 0.226044 (0.287828) | 5.126435 / 2.268929 (2.857506) | 2.691278 / 55.444624 (-52.753346) | 2.361723 / 6.876477 (-4.514754) | 2.511213 / 2.142072 (0.369141) | 0.593558 / 4.805227 (-4.211670) | 0.129332 / 6.500664 (-6.371332) | 0.064051 / 0.075469 (-0.011418) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.289049 / 1.841788 (-0.552739) | 18.912363 / 8.074308 (10.838055) | 14.226500 / 10.191392 (4.035108) | 0.131392 / 0.680424 (-0.549032) | 0.016750 / 0.534201 (-0.517451) | 0.330078 / 0.579283 (-0.249205) | 0.347588 / 0.434364 (-0.086776) | 0.383234 / 0.540337 (-0.157103) | 0.510967 / 1.386936 (-0.875969) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d7892beb30bab0633b84398c5ea43d7e69fe38cc \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005974 / 0.011353 (-0.005379) | 0.003691 / 0.011008 (-0.007317) | 0.079410 / 0.038508 (0.040902) | 0.061769 / 0.023109 (0.038660) | 0.323310 / 0.275898 (0.047412) | 0.354325 / 0.323480 (0.030845) | 0.004794 / 0.007986 (-0.003191) | 0.002899 / 0.004328 (-0.001430) | 0.062104 / 0.004250 (0.057854) | 0.048973 / 0.037052 (0.011921) | 0.326497 / 0.258489 (0.068008) | 0.361347 / 0.293841 (0.067506) | 0.026741 / 0.128546 (-0.101805) | 0.007936 / 0.075646 (-0.067710) | 0.259168 / 0.419271 (-0.160104) | 0.044859 / 0.043533 (0.001327) | 0.319342 / 0.255139 (0.064203) | 0.343711 / 0.283200 (0.060511) | 0.022298 / 0.141683 (-0.119384) | 1.451595 / 1.452155 (-0.000560) | 1.573730 / 1.492716 (0.081014) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.173086 / 0.018006 (0.155080) | 0.432400 / 0.000490 (0.431910) | 0.003739 / 0.000200 (0.003539) | 0.000073 / 0.000054 (0.000019) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024477 / 0.037411 (-0.012934) | 0.073463 / 0.014526 (0.058937) | 0.083410 / 0.176557 (-0.093146) | 0.144760 / 0.737135 (-0.592376) | 0.084199 / 0.296338 (-0.212140) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.388251 / 0.215209 
(0.173042) | 3.875375 / 2.077655 (1.797720) | 1.875515 / 1.504120 (0.371395) | 1.729282 / 1.541195 (0.188087) | 1.784732 / 1.468490 (0.316242) | 0.496985 / 4.584777 (-4.087792) | 3.030276 / 3.745712 (-0.715436) | 2.813192 / 5.269862 (-2.456669) | 1.868647 / 4.565676 (-2.697030) | 0.057376 / 0.424275 (-0.366899) | 0.006463 / 0.007607 (-0.001144) | 0.462153 / 0.226044 (0.236108) | 4.586583 / 2.268929 (2.317654) | 2.287730 / 55.444624 (-53.156894) | 1.972177 / 6.876477 (-4.904299) | 2.151592 / 2.142072 (0.009520) | 0.587169 / 4.805227 (-4.218058) | 0.127063 / 6.500664 (-6.373601) | 0.060297 / 0.075469 (-0.015172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.267651 / 1.841788 (-0.574136) | 18.426011 / 8.074308 (10.351703) | 14.050470 / 10.191392 (3.859078) | 0.148063 / 0.680424 (-0.532361) | 0.017112 / 0.534201 (-0.517089) | 0.330051 / 0.579283 (-0.249232) | 0.358730 / 0.434364 (-0.075634) | 0.392365 / 0.540337 (-0.147972) | 0.534650 / 1.386936 (-0.852286) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005936 / 0.011353 (-0.005417) | 0.003652 / 0.011008 (-0.007356) | 0.063066 / 0.038508 (0.024558) | 0.060617 / 0.023109 (0.037507) | 0.388293 / 0.275898 (0.112395) | 0.411422 / 0.323480 (0.087942) | 0.004691 / 0.007986 (-0.003295) | 0.002857 / 0.004328 (-0.001472) | 0.064198 / 0.004250 (0.059947) | 0.049124 / 0.037052 (0.012071) | 0.403601 / 0.258489 (0.145112) | 0.413619 / 0.293841 (0.119778) | 0.027279 / 0.128546 (-0.101267) | 0.008072 / 0.075646 (-0.067575) | 0.067890 / 0.419271 (-0.351381) | 0.041866 / 0.043533 (-0.001667) | 0.393438 / 0.255139 (0.138299) | 0.402865 / 0.283200 (0.119666) | 0.023381 / 0.141683 (-0.118302) | 1.496324 / 1.452155 (0.044170) | 1.538080 / 1.492716 (0.045364) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.212065 / 0.018006 (0.194059) | 0.410511 / 0.000490 (0.410021) | 0.001236 / 0.000200 (0.001036) | 0.000067 / 0.000054 
(0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026012 / 0.037411 (-0.011399) | 0.076592 / 0.014526 (0.062066) | 0.085963 / 0.176557 (-0.090594) | 0.137803 / 0.737135 (-0.599332) | 0.087594 / 0.296338 (-0.208745) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.434283 / 0.215209 (0.219074) | 4.345478 / 2.077655 (2.267824) | 2.400954 / 1.504120 (0.896834) | 2.282024 / 1.541195 (0.740829) | 2.414247 / 1.468490 (0.945757) | 0.501855 / 4.584777 (-4.082922) | 3.059433 / 3.745712 (-0.686279) | 2.811288 / 5.269862 (-2.458574) | 1.856839 / 4.565676 (-2.708838) | 0.058017 / 0.424275 (-0.366258) | 0.006844 / 0.007607 (-0.000763) | 0.515376 / 0.226044 (0.289332) | 5.148775 / 2.268929 (2.879847) | 2.930807 / 55.444624 (-52.513817) | 2.520532 / 6.876477 (-4.355944) | 2.746299 / 2.142072 (0.604227) | 0.590102 / 4.805227 (-4.215125) | 0.125747 / 6.500664 (-6.374917) | 0.061873 / 0.075469 (-0.013597) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.306247 / 1.841788 (-0.535541) | 18.366048 / 8.074308 (10.291740) | 13.855617 / 10.191392 (3.664225) | 0.150124 / 0.680424 (-0.530300) | 0.017189 / 0.534201 (-0.517012) | 0.336285 / 0.579283 (-0.242998) | 0.344985 / 0.434364 (-0.089379) | 0.397973 / 0.540337 (-0.142364) | 0.536142 / 1.386936 (-0.850794) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1ae24cf12054b4a512f198979b1ca7707bb99d56 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | 
read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006401 / 0.011353 (-0.004952) | 0.003789 / 0.011008 (-0.007219) | 0.079516 / 0.038508 (0.041008) | 0.068279 / 0.023109 (0.045170) | 0.295691 / 0.275898 (0.019793) | 0.327208 / 0.323480 (0.003728) | 0.005070 / 0.007986 (-0.002915) | 0.003044 / 0.004328 (-0.001285) | 0.061411 / 0.004250 (0.057161) | 0.053227 / 0.037052 (0.016175) | 0.297368 / 0.258489 (0.038879) | 0.334740 / 0.293841 (0.040899) | 0.029459 / 0.128546 (-0.099087) | 0.008080 / 0.075646 (-0.067566) | 0.267344 / 0.419271 (-0.151927) | 0.049877 / 0.043533 (0.006344) | 0.293853 / 0.255139 (0.038714) | 0.319819 / 0.283200 (0.036620) | 0.022593 / 0.141683 (-0.119089) | 1.459054 / 1.452155 (0.006900) | 1.471250 / 1.492716 (-0.021466) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.194326 / 0.018006 (0.176320) | 0.443565 / 0.000490 (0.443075) | 0.003745 / 0.000200 (0.003545) | 0.000075 / 0.000054 (0.000021) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026640 / 0.037411 (-0.010772) | 0.077630 / 0.014526 (0.063104) | 0.089364 / 0.176557 (-0.087192) | 0.147327 / 0.737135 (-0.589809) | 0.089603 / 0.296338 (-0.206735) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.373758 / 0.215209 (0.158549) | 3.746778 / 2.077655 (1.669123) | 1.814991 / 1.504120 (0.310871) | 1.645650 / 1.541195 (0.104455) | 1.690752 / 1.468490 (0.222262) | 0.472117 / 4.584777 (-4.112660) | 3.457346 / 3.745712 (-0.288367) | 3.138869 / 5.269862 (-2.130993) | 1.934924 / 4.565676 (-2.630753) | 0.055709 / 0.424275 (-0.368566) | 0.006680 / 0.007607 (-0.000927) | 0.446874 / 0.226044 (0.220829) | 4.458409 / 2.268929 (2.189480) | 2.253932 / 55.444624 (-53.190693) | 2.007240 / 6.876477 (-4.869237) | 2.081687 / 2.142072 (-0.060386) | 0.563379 / 4.805227 (-4.241848) | 0.128694 / 6.500664 (-6.371970) | 0.057409 / 0.075469 (-0.018060) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212231 / 1.841788 (-0.629556) | 18.519121 / 8.074308 (10.444813) | 13.582243 / 10.191392 (3.390851) | 0.142488 / 0.680424 (-0.537936) | 0.017421 / 0.534201 (-0.516780) | 0.366864 / 0.579283 (-0.212419) | 0.401467 / 0.434364 (-0.032897) | 
0.443659 / 0.540337 (-0.096679) | 0.618854 / 1.386936 (-0.768082) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006121 / 0.011353 (-0.005232) | 0.003690 / 0.011008 (-0.007318) | 0.060340 / 0.038508 (0.021832) | 0.067215 / 0.023109 (0.044106) | 0.382846 / 0.275898 (0.106948) | 0.415774 / 0.323480 (0.092294) | 0.004868 / 0.007986 (-0.003118) | 0.003108 / 0.004328 (-0.001221) | 0.060572 / 0.004250 (0.056321) | 0.050453 / 0.037052 (0.013401) | 0.400494 / 0.258489 (0.142005) | 0.424368 / 0.293841 (0.130527) | 0.030279 / 0.128546 (-0.098267) | 0.008151 / 0.075646 (-0.067495) | 0.066707 / 0.419271 (-0.352564) | 0.046118 / 0.043533 (0.002585) | 0.386697 / 0.255139 (0.131558) | 0.410156 / 0.283200 (0.126957) | 0.020688 / 0.141683 (-0.120995) | 1.418162 / 1.452155 (-0.033993) | 1.463057 / 1.492716 (-0.029659) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216081 / 0.018006 (0.198075) | 0.440541 / 0.000490 (0.440051) | 0.000371 / 0.000200 (0.000171) | 0.000054 / 0.000054 (-0.000000) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027763 / 0.037411 (-0.009648) | 0.082316 / 0.014526 (0.067791) | 0.094086 / 0.176557 (-0.082471) | 0.144738 / 0.737135 (-0.592398) | 0.094837 / 0.296338 (-0.201501) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396277 / 0.215209 (0.181068) | 3.958791 / 2.077655 (1.881136) | 2.021367 / 1.504120 (0.517247) | 1.860112 / 1.541195 
(0.318917) | 1.886032 / 1.468490 (0.417541) | 0.468536 / 4.584777 (-4.116241) | 3.417950 / 3.745712 (-0.327762) | 4.849991 / 5.269862 (-0.419871) | 2.773935 / 4.565676 (-1.791742) | 0.055813 / 0.424275 (-0.368462) | 0.007053 / 0.007607 (-0.000554) | 0.470167 / 0.226044 (0.244122) | 4.702969 / 2.268929 (2.434041) | 2.474161 / 55.444624 (-52.970464) | 2.171256 / 6.876477 (-4.705220) | 2.315373 / 2.142072 (0.173301) | 0.589195 / 4.805227 (-4.216032) | 0.128237 / 6.500664 (-6.372427) | 0.058641 / 0.075469 (-0.016828) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.292947 / 1.841788 (-0.548841) | 18.851300 / 8.074308 (10.776992) | 14.089764 / 10.191392 (3.898372) | 0.164853 / 0.680424 (-0.515571) | 0.017281 / 0.534201 (-0.516920) | 0.359112 / 0.579283 (-0.220171) | 0.386696 / 0.434364 (-0.047668) | 0.428222 / 0.540337 (-0.112115) | 0.568659 / 1.386936 (-0.818277) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#563864ded894b468e2ba3f677ef79c5ab3fe65df \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006051 / 0.011353 (-0.005301) | 0.003654 / 0.011008 (-0.007355) | 0.080081 / 0.038508 (0.041572) | 0.062925 / 0.023109 (0.039815) | 0.358097 / 0.275898 (0.082199) | 0.405728 / 0.323480 (0.082248) | 0.005359 / 0.007986 (-0.002627) | 0.002820 / 0.004328 (-0.001508) | 0.063108 / 0.004250 (0.058858) | 0.049627 / 0.037052 (0.012575) | 0.397870 / 0.258489 (0.139381) | 0.437157 / 0.293841 (0.143316) | 0.027707 / 0.128546 (-0.100839) | 0.007911 / 0.075646 (-0.067735) | 0.260991 / 0.419271 (-0.158280) | 0.044771 / 0.043533 (0.001238) | 0.340230 / 0.255139 (0.085091) | 0.384925 / 0.283200 (0.101725) | 0.021369 / 0.141683 (-0.120314) | 1.431439 / 1.452155 (-0.020715) | 1.478794 / 1.492716 (-0.013922) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.182626 / 0.018006 (0.164620) | 0.435551 / 0.000490 
(0.435061) | 0.003015 / 0.000200 (0.002815) | 0.000064 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024703 / 0.037411 (-0.012708) | 0.073640 / 0.014526 (0.059114) | 0.084598 / 0.176557 (-0.091959) | 0.145810 / 0.737135 (-0.591325) | 0.085125 / 0.296338 (-0.211213) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.394539 / 0.215209 (0.179330) | 3.945882 / 2.077655 (1.868227) | 1.947166 / 1.504120 (0.443046) | 1.763305 / 1.541195 (0.222111) | 1.816208 / 1.468490 (0.347718) | 0.498880 / 4.584777 (-4.085897) | 3.098283 / 3.745712 (-0.647429) | 2.823474 / 5.269862 (-2.446388) | 1.873993 / 4.565676 (-2.691684) | 0.058097 / 0.424275 (-0.366179) | 0.006488 / 0.007607 (-0.001119) | 0.466711 / 0.226044 (0.240667) | 4.671520 / 2.268929 (2.402592) | 2.363381 / 55.444624 (-53.081243) | 2.052092 / 6.876477 (-4.824385) | 2.209212 / 2.142072 (0.067140) | 0.594650 / 4.805227 (-4.210577) | 0.125604 / 6.500664 (-6.375060) | 0.061511 / 0.075469 (-0.013958) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.226564 / 1.841788 (-0.615224) | 18.583605 / 8.074308 (10.509297) | 13.993091 / 10.191392 (3.801699) | 0.146185 / 0.680424 (-0.534239) | 0.016839 / 0.534201 (-0.517362) | 0.334116 / 0.579283 (-0.245167) | 0.360780 / 0.434364 (-0.073584) | 0.386008 / 0.540337 (-0.154329) | 0.643278 / 1.386936 (-0.743658) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | 
write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006174 / 0.011353 (-0.005179) | 0.003658 / 0.011008 (-0.007350) | 0.063250 / 0.038508 (0.024742) | 0.063542 / 0.023109 (0.040433) | 0.366845 / 0.275898 (0.090947) | 0.409794 / 0.323480 (0.086314) | 0.005678 / 0.007986 (-0.002308) | 0.003061 / 0.004328 (-0.001268) | 0.063561 / 0.004250 (0.059311) | 0.052648 / 0.037052 (0.015596) | 0.378096 / 0.258489 (0.119607) | 0.410706 / 0.293841 (0.116865) | 0.027668 / 0.128546 (-0.100878) | 0.008045 / 0.075646 (-0.067601) | 0.068290 / 0.419271 (-0.350981) | 0.042602 / 0.043533 (-0.000930) | 0.364976 / 0.255139 (0.109837) | 0.395599 / 0.283200 (0.112400) | 0.022733 / 0.141683 (-0.118950) | 1.522473 / 1.452155 (0.070319) | 1.515891 / 1.492716 (0.023175) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.232554 / 0.018006 (0.214547) | 0.420702 / 0.000490 (0.420213) | 0.002161 / 0.000200 (0.001961) | 0.000064 / 0.000054 (0.000009) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026276 / 0.037411 (-0.011135) | 0.078504 / 0.014526 (0.063978) | 0.088989 / 0.176557 (-0.087567) | 0.144044 / 0.737135 (-0.593091) | 0.091074 / 0.296338 (-0.205265) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.420189 / 0.215209 (0.204980) | 4.189596 / 2.077655 (2.111941) | 2.316425 / 1.504120 (0.812305) | 2.186877 / 1.541195 (0.645682) | 2.259065 / 1.468490 (0.790575) | 0.502827 / 4.584777 (-4.081950) | 3.135266 / 3.745712 (-0.610446) | 2.838808 / 5.269862 (-2.431053) | 1.876519 / 4.565676 (-2.689158) | 0.057802 / 0.424275 (-0.366473) | 0.006824 / 0.007607 (-0.000784) | 0.500213 / 0.226044 (0.274168) | 4.999798 / 2.268929 (2.730869) | 2.627713 / 55.444624 (-52.816911) | 2.344263 / 6.876477 (-4.532214) | 2.415449 / 2.142072 (0.273376) | 0.593082 / 4.805227 (-4.212145) | 0.125787 / 6.500664 (-6.374877) | 0.062699 / 0.075469 (-0.012770) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.308219 / 1.841788 (-0.533569) | 18.703099 / 8.074308 (10.628791) | 13.976234 / 10.191392 (3.784842) | 0.144037 / 0.680424 (-0.536387) | 0.016592 / 0.534201 (-0.517609) | 0.333078 / 0.579283 (-0.246206) | 0.342317 / 0.434364 (-0.092047) | 0.396837 / 0.540337 (-0.143500) | 0.532641 / 1.386936 (-0.854295) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#14f6edd9222e577dccb962ed5338b79b73502fa5 \"CML watermark\")\n" ]
2023-07-13T15:41:44Z
2023-07-17T17:09:39Z
2023-07-17T17:01:00Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6028.diff", "html_url": "https://github.com/huggingface/datasets/pull/6028", "merged_at": "2023-07-17T17:01:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/6028.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6028" }
Thanks to @janineguo's work in https://github.com/huggingface/datasets/pull/5919, which was needed to support HfFileSystem. Switching to `HfFileSystem` will help implement optimizations in data files resolution. ## Implementation details I replaced all the `from_hf_repo` and `from_local_or_remote` methods in data_files.py with a single new `from_patterns` method, which works for any fsspec path, including hf:// paths, https:// URLs and local paths. This simplifies the codebase since there is no longer any logic duplication when it comes to data files resolution. I added `_prepare_path_and_storage_options`, which returns the right storage_options to use given a path and a `DownloadConfig`. This is the only place where the logic depends on the filesystem type that must be used. I also removed the `get_metadata_data_files_list` and `get_patterns_and_data_files` functions added recently, since data files resolution is now handled through a common interface. ## New features hf:// paths are now supported in data_files. ## Breaking changes DataFilesList and DataFilesDict: - use `str` paths instead of `Union[Path, Url]` - require posix paths for windows paths close https://github.com/huggingface/datasets/issues/6017
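To illustrate the new feature this PR body describes, here is a minimal sketch of passing an hf:// path directly in `data_files`. The repo id and parquet file name are hypothetical placeholders, not taken from the PR itself:

```python
from datasets import load_dataset

# Minimal sketch of the feature described above: hf:// paths can now be
# passed directly in `data_files`. The repo id and file name below are
# hypothetical placeholders, not taken from the PR.
ds = load_dataset(
    "parquet",
    data_files={"train": "hf://datasets/username/my_dataset/data/train-00000-of-00001.parquet"},
    split="train",
)
```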
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6028/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6028/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/142
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/142/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/142/comments
https://api.github.com/repos/huggingface/datasets/issues/142/events
https://github.com/huggingface/datasets/pull/142
619,450,068
MDExOlB1bGxSZXF1ZXN0NDE4OTU0OTc1
142
[WMT] Add all wmt
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[]
2020-05-16T11:28:46Z
2020-05-17T12:18:21Z
2020-05-17T12:18:20Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/142.diff", "html_url": "https://github.com/huggingface/datasets/pull/142", "merged_at": "2020-05-17T12:18:20Z", "patch_url": "https://github.com/huggingface/datasets/pull/142.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/142" }
This PR adds all wmt dataset scripts. At the moment the scripts are **not** functional for the language pairs "cs-en", "ru-en" and "hi-en" because apparently it takes up to a week to get the manual data for these datasets: see http://ufal.mff.cuni.cz/czeng. The datasets are fully functional, though, for the "big" language pairs "de-en" and "fr-en". Overall I think the scripts are very messy and might need a big refactoring at some point. For now I think they are good to merge (most dataset configs can be used). I will add "cs", "ru" and "hi" when the manual data is available.
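As a hedged usage sketch for one of the "big" language pairs the PR describes as already functional: the `"wmt14"` script name and `"de-en"` config name below are assumptions based on the pairs listed above, written against the current `datasets` API rather than the API at the time of the PR:

```python
from datasets import load_dataset

# Load one of the "big" language pairs described as functional above.
# "wmt14" and "de-en" are assumed names, not confirmed by the PR body.
wmt = load_dataset("wmt14", "de-en", split="train")
print(wmt[0])  # expected shape: {"translation": {"de": "...", "en": "..."}}
```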
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/142/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/142/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5485
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5485/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5485/comments
https://api.github.com/repos/huggingface/datasets/issues/5485/events
https://github.com/huggingface/datasets/pull/5485
1,563,002,829
PR_kwDODunzps5I2ER2
5,485
Add section in tutorial for IterableDataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4", "events_url": "https://api.github.com/users/stevhliu/events{/privacy}", "followers_url": "https://api.github.com/users/stevhliu/followers", "following_url": "https://api.github.com/users/stevhliu/following{/other_user}", "gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stevhliu", "id": 59462357, "login": "stevhliu", "node_id": "MDQ6VXNlcjU5NDYyMzU3", "organizations_url": "https://api.github.com/users/stevhliu/orgs", "received_events_url": "https://api.github.com/users/stevhliu/received_events", "repos_url": "https://api.github.com/users/stevhliu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions", "type": "User", "url": "https://api.github.com/users/stevhliu" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.008492 / 0.011353 (-0.002861) | 0.004717 / 0.011008 (-0.006292) | 0.101111 / 0.038508 (0.062602) | 0.029129 / 0.023109 (0.006019) | 0.307564 / 0.275898 (0.031666) | 0.367038 / 0.323480 (0.043558) | 0.007105 / 0.007986 (-0.000881) | 0.003622 / 0.004328 (-0.000706) | 0.078370 / 0.004250 (0.074120) | 0.036960 / 0.037052 (-0.000093) | 0.315612 / 0.258489 (0.057123) | 0.353601 / 0.293841 (0.059760) | 0.032900 / 0.128546 (-0.095647) | 0.011405 / 0.075646 (-0.064241) | 0.322331 / 0.419271 (-0.096940) | 0.040823 / 0.043533 (-0.002710) | 0.306734 / 0.255139 (0.051595) | 0.328155 / 0.283200 (0.044955) | 0.087169 / 0.141683 (-0.054514) | 1.460543 / 1.452155 (0.008389) | 1.498094 / 1.492716 (0.005378) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.011863 / 0.018006 (-0.006143) | 0.416315 / 0.000490 (0.415826) | 0.003463 / 0.000200 (0.003263) | 0.000075 / 0.000054 (0.000020) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023219 / 0.037411 (-0.014192) | 0.096469 / 0.014526 (0.081943) | 0.105960 / 0.176557 (-0.070596) | 0.148993 / 0.737135 (-0.588142) | 0.108112 / 0.296338 (-0.188226) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.415662 / 0.215209 (0.200453) | 4.155111 / 2.077655 (2.077456) | 
1.834943 / 1.504120 (0.330823) | 1.622752 / 1.541195 (0.081557) | 1.701630 / 1.468490 (0.233140) | 0.690596 / 4.584777 (-3.894181) | 3.399385 / 3.745712 (-0.346327) | 3.140521 / 5.269862 (-2.129341) | 1.609152 / 4.565676 (-2.956524) | 0.082132 / 0.424275 (-0.342143) | 0.012343 / 0.007607 (0.004735) | 0.532715 / 0.226044 (0.306670) | 5.323032 / 2.268929 (3.054104) | 2.326625 / 55.444624 (-53.118000) | 1.944263 / 6.876477 (-4.932213) | 1.994015 / 2.142072 (-0.148058) | 0.813805 / 4.805227 (-3.991422) | 0.149233 / 6.500664 (-6.351431) | 0.065318 / 0.075469 (-0.010151) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.212441 / 1.841788 (-0.629347) | 13.979069 / 8.074308 (5.904761) | 14.003998 / 10.191392 (3.812606) | 0.146956 / 0.680424 (-0.533468) | 0.028564 / 0.534201 (-0.505637) | 0.392370 / 0.579283 (-0.186913) | 0.399695 / 0.434364 (-0.034669) | 0.473481 / 0.540337 (-0.066856) | 0.562625 / 1.386936 (-0.824311) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006821 / 0.011353 (-0.004532) | 0.004570 / 0.011008 (-0.006438) | 0.076217 / 0.038508 (0.037709) | 0.028888 / 0.023109 (0.005779) | 0.345431 / 0.275898 (0.069533) | 0.389246 / 0.323480 (0.065766) | 0.005939 / 0.007986 (-0.002046) | 0.003356 / 0.004328 (-0.000973) | 0.075880 / 0.004250 (0.071629) | 0.041427 / 0.037052 (0.004374) | 0.344481 / 0.258489 (0.085992) | 0.398508 / 0.293841 (0.104667) | 0.031801 / 0.128546 (-0.096745) | 0.011763 / 0.075646 (-0.063884) | 0.085600 / 0.419271 (-0.333672) | 0.042656 / 0.043533 (-0.000876) | 0.345893 / 0.255139 (0.090754) | 0.376910 / 0.283200 (0.093711) | 0.092451 / 0.141683 (-0.049232) | 1.461222 / 1.452155 (0.009068) | 1.555822 / 1.492716 (0.063106) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235781 / 0.018006 (0.217774) | 0.418485 / 0.000490 (0.417995) | 0.005560 / 0.000200 (0.005360) | 0.000078 / 0.000054 (0.000023) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025410 / 0.037411 (-0.012001) | 0.103780 / 0.014526 (0.089254) | 0.110183 / 0.176557 (-0.066374) | 0.151097 / 0.737135 (-0.586039) | 0.112539 / 0.296338 (-0.183799) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.436686 / 0.215209 (0.221477) | 4.341594 / 2.077655 (2.263940) | 2.062309 / 1.504120 (0.558190) | 1.857461 / 1.541195 (0.316267) | 1.947204 / 1.468490 (0.478713) | 0.699641 / 4.584777 (-3.885136) | 3.406983 / 3.745712 (-0.338729) | 3.294705 / 5.269862 (-1.975157) | 1.360582 / 4.565676 (-3.205095) | 0.083025 / 0.424275 (-0.341250) | 0.012461 / 0.007607 (0.004854) | 0.537767 / 0.226044 (0.311722) | 5.393316 / 2.268929 (3.124387) | 2.516692 / 55.444624 (-52.927932) | 2.163987 / 6.876477 (-4.712490) | 2.220480 / 2.142072 (0.078408) | 0.810648 / 4.805227 (-3.994579) | 0.151820 / 6.500664 (-6.348844) | 0.068080 / 0.075469 (-0.007389) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.279382 / 1.841788 (-0.562405) | 13.989947 / 8.074308 (5.915638) | 14.039229 / 10.191392 (3.847836) | 0.141071 / 0.680424 (-0.539352) | 0.017118 / 0.534201 (-0.517083) | 0.381558 / 0.579283 (-0.197725) | 0.390407 / 0.434364 (-0.043957) | 0.440920 / 0.540337 (-0.099418) | 0.525478 / 1.386936 (-0.861458) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#eeedb5167d150888a640cd70ca63d6d72bbe1043 \"CML watermark\")\n" ]
2023-01-30T18:43:04Z
2023-02-01T18:15:38Z
2023-02-01T18:08:46Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5485.diff", "html_url": "https://github.com/huggingface/datasets/pull/5485", "merged_at": "2023-02-01T18:08:46Z", "patch_url": "https://github.com/huggingface/datasets/pull/5485.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5485" }
Introduces an `IterableDataset` and how to access it in the tutorial section. It also adds a brief next step section at the end to provide a path for users who want more explanation and a path for users who want something more practical and learn how to preprocess these dataset types. It'll complement the awesome new doc introduced in: - #5410
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5485/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5485/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/626/comments
https://api.github.com/repos/huggingface/datasets/issues/626/events
https://github.com/huggingface/datasets/pull/626
701,352,605
MDExOlB1bGxSZXF1ZXN0NDg2ODIzMTY1
626
Update GLUE URLs (now hosted on FB)
{ "avatar_url": "https://avatars.githubusercontent.com/u/57466294?v=4", "events_url": "https://api.github.com/users/jeswan/events{/privacy}", "followers_url": "https://api.github.com/users/jeswan/followers", "following_url": "https://api.github.com/users/jeswan/following{/other_user}", "gists_url": "https://api.github.com/users/jeswan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jeswan", "id": 57466294, "login": "jeswan", "node_id": "MDQ6VXNlcjU3NDY2Mjk0", "organizations_url": "https://api.github.com/users/jeswan/orgs", "received_events_url": "https://api.github.com/users/jeswan/received_events", "repos_url": "https://api.github.com/users/jeswan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jeswan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jeswan/subscriptions", "type": "User", "url": "https://api.github.com/users/jeswan" }
[]
closed
false
null
[]
null
[]
2020-09-14T19:05:39Z
2020-09-16T06:53:18Z
2020-09-16T06:53:18Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/626.diff", "html_url": "https://github.com/huggingface/datasets/pull/626", "merged_at": "2020-09-16T06:53:18Z", "patch_url": "https://github.com/huggingface/datasets/pull/626.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/626" }
NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112. Note: rebased on huggingface/datasets
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/626/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/626/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5466
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5466/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5466/comments
https://api.github.com/repos/huggingface/datasets/issues/5466/events
https://github.com/huggingface/datasets/pull/5466
1,557,584,845
PR_kwDODunzps5Ij-z1
5,466
remove pathlib.Path with URIs
{ "avatar_url": "https://avatars.githubusercontent.com/u/121845112?v=4", "events_url": "https://api.github.com/users/jonny-cyberhaven/events{/privacy}", "followers_url": "https://api.github.com/users/jonny-cyberhaven/followers", "following_url": "https://api.github.com/users/jonny-cyberhaven/following{/other_user}", "gists_url": "https://api.github.com/users/jonny-cyberhaven/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jonny-cyberhaven", "id": 121845112, "login": "jonny-cyberhaven", "node_id": "U_kgDOB0M1eA", "organizations_url": "https://api.github.com/users/jonny-cyberhaven/orgs", "received_events_url": "https://api.github.com/users/jonny-cyberhaven/received_events", "repos_url": "https://api.github.com/users/jonny-cyberhaven/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jonny-cyberhaven/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jonny-cyberhaven/subscriptions", "type": "User", "url": "https://api.github.com/users/jonny-cyberhaven" }
[]
closed
false
null
[]
null
[ "Thanks !\r\n`os.path.join` will use a backslash `\\` on windows which will also fail. You can use this instead in `load_from_disk`:\r\n```python\r\nfrom .filesystems import is_remote_filesystem\r\n\r\nis_local = not is_remote_filesystem(fs)\r\npath_join = os.path.join if is_local else posixpath.join\r\n```", "Thank you ! I did a minor change to not have to define a new function and I ran the CI. If it's green we can merge :)", "_The documentation is not available anymore as the PR was closed or merged._", "> \r\n\r\n\r\n\r\n> Thank you ! I did a minor change to not have to define a new function and I ran the CI. If it's green we can merge :)\r\n\r\nlol it's a battle of +1 imports or +1 functions. LGTM, I was editing fast and swapped which branch gets os vs Path. Should be ok now 🤙", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.012043 / 0.011353 (0.000690) | 0.006585 / 0.011008 (-0.004423) | 0.149007 / 0.038508 (0.110499) | 0.039514 / 0.023109 (0.016405) | 0.403893 / 0.275898 (0.127995) | 0.431252 / 0.323480 (0.107772) | 0.009218 / 0.007986 (0.001233) | 0.006108 / 0.004328 (0.001779) | 0.114666 / 0.004250 (0.110416) | 0.044962 / 0.037052 (0.007910) | 0.411592 / 0.258489 (0.153103) | 0.461561 / 0.293841 (0.167721) | 0.059958 / 0.128546 (-0.068589) | 0.029047 / 0.075646 (-0.046599) | 0.456000 / 0.419271 (0.036728) | 0.060744 / 0.043533 (0.017211) | 0.415816 / 0.255139 (0.160677) | 0.430488 / 0.283200 (0.147289) | 0.122477 / 0.141683 (-0.019205) | 1.862910 / 1.452155 (0.410755) | 1.974698 / 1.492716 (0.481981) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.257230 / 0.018006 (0.239224) | 0.606854 / 0.000490 (0.606364) | 0.006175 / 0.000200 (0.005975) | 0.000099 / 0.000054 (0.000044) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.030533 / 0.037411 (-0.006879) | 0.130702 / 0.014526 (0.116177) | 0.143781 / 0.176557 (-0.032775) | 0.183272 / 0.737135 (-0.553863) | 0.151267 / 0.296338 (-0.145071) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | 
read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.637422 / 0.215209 (0.422213) | 6.503535 / 2.077655 (4.425880) | 2.630387 / 1.504120 (1.126267) | 2.281180 / 1.541195 (0.739985) | 2.354341 / 1.468490 (0.885851) | 1.306497 / 4.584777 (-3.278280) | 5.837184 / 3.745712 (2.091472) | 3.257198 / 5.269862 (-2.012663) | 2.050681 / 4.565676 (-2.514995) | 0.146415 / 0.424275 (-0.277860) | 0.015386 / 0.007607 (0.007779) | 0.790146 / 0.226044 (0.564102) | 8.056137 / 2.268929 (5.787209) | 3.383566 / 55.444624 (-52.061059) | 2.707620 / 6.876477 (-4.168856) | 2.714857 / 2.142072 (0.572785) | 1.520847 / 4.805227 (-3.284380) | 0.266028 / 6.500664 (-6.234636) | 0.091422 / 0.075469 (0.015953) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.656148 / 1.841788 (-0.185640) | 18.833393 / 8.074308 (10.759085) | 21.360824 / 10.191392 (11.169432) | 0.227608 / 0.680424 (-0.452816) | 0.049018 / 0.534201 (-0.485183) | 0.593418 / 0.579283 (0.014135) | 0.656690 / 0.434364 (0.222326) | 0.709171 / 0.540337 (0.168833) | 0.828226 / 1.386936 (-0.558710) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.010112 / 0.011353 (-0.001241) | 0.006761 / 0.011008 (-0.004247) | 0.146723 / 0.038508 (0.108215) | 0.038451 / 0.023109 (0.015342) | 0.524267 / 0.275898 (0.248369) | 0.609484 / 0.323480 (0.286004) | 0.008502 / 0.007986 (0.000516) | 0.006964 / 0.004328 (0.002635) | 0.111396 / 0.004250 (0.107146) | 0.056839 / 0.037052 (0.019787) | 0.514649 / 0.258489 (0.256160) | 
0.604212 / 0.293841 (0.310372) | 0.061410 / 0.128546 (-0.067137) | 0.020396 / 0.075646 (-0.055250) | 0.505026 / 0.419271 (0.085754) | 0.067280 / 0.043533 (0.023747) | 0.522249 / 0.255139 (0.267110) | 0.559484 / 0.283200 (0.276284) | 0.120943 / 0.141683 (-0.020740) | 2.124323 / 1.452155 (0.672169) | 2.153397 / 1.492716 (0.660681) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.216614 / 0.018006 (0.198608) | 0.594181 / 0.000490 (0.593692) | 0.004079 / 0.000200 (0.003879) | 0.000117 / 0.000054 (0.000063) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.036925 / 0.037411 (-0.000486) | 0.131322 / 0.014526 (0.116797) | 0.148542 / 0.176557 (-0.028015) | 0.196045 / 0.737135 (-0.541090) | 0.156867 / 0.296338 (-0.139472) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.669722 / 0.215209 (0.454513) | 6.858856 / 2.077655 (4.781202) | 3.093969 / 1.504120 (1.589849) | 2.667385 / 1.541195 (1.126190) | 2.797192 / 1.468490 (1.328702) | 1.334759 / 4.584777 (-3.250018) | 6.024861 / 3.745712 (2.279149) | 3.257779 / 5.269862 (-2.012083) | 2.202816 / 4.565676 (-2.362860) | 0.147617 / 0.424275 (-0.276658) | 0.015451 / 0.007607 (0.007844) | 0.887015 / 0.226044 (0.660970) | 8.371288 / 2.268929 (6.102360) | 3.807451 / 55.444624 (-51.637173) | 3.079483 / 6.876477 (-3.796994) | 3.103321 / 2.142072 (0.961249) | 1.520272 / 4.805227 (-3.284955) | 0.273079 / 6.500664 (-6.227585) | 0.088613 / 0.075469 (0.013143) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.818913 / 1.841788 (-0.022875) | 19.274269 / 8.074308 (11.199960) | 19.871784 / 10.191392 (9.680392) | 0.250388 / 0.680424 (-0.430036) | 0.030562 / 0.534201 (-0.503638) | 0.560566 / 0.579283 (-0.018717) | 0.664701 / 0.434364 (0.230337) | 0.714513 / 0.540337 (0.174176) | 0.827227 / 1.386936 (-0.559710) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#f7a9bf823ea41b85313c0392388ec68b3033ef29 \"CML watermark\")\n" ]
2023-01-26T03:25:45Z
2023-01-26T17:08:57Z
2023-01-26T16:59:11Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5466.diff", "html_url": "https://github.com/huggingface/datasets/pull/5466", "merged_at": "2023-01-26T16:59:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/5466.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5466" }
Pathlib will convert "//" to "/", which causes retry errors when downloading from cloud storage.
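For context, a minimal sketch (not taken from this PR) of the pitfall being fixed: `pathlib` collapses a doubled slash, which mangles URI schemes, while `posixpath.join` leaves them intact. The bucket name below is hypothetical.

```python
import pathlib
import posixpath

# pathlib normalizes "//" to "/", so the URI scheme gets mangled:
print(pathlib.PurePosixPath("gs://bucket/data"))  # -> gs:/bucket/data

# posixpath.join preserves the scheme and always uses forward slashes,
# which makes it safe for remote URIs (unlike os.path.join on Windows):
print(posixpath.join("gs://bucket/data", "train.arrow"))  # -> gs://bucket/data/train.arrow
```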
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5466/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5466/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4924
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4924/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4924/comments
https://api.github.com/repos/huggingface/datasets/issues/4924/events
https://github.com/huggingface/datasets/issues/4924
1,358,611,513
I_kwDODunzps5Q-sQ5
4,924
Concatenate_datasets loads everything into RAM
{ "avatar_url": "https://avatars.githubusercontent.com/u/39416047?v=4", "events_url": "https://api.github.com/users/louisdeneve/events{/privacy}", "followers_url": "https://api.github.com/users/louisdeneve/followers", "following_url": "https://api.github.com/users/louisdeneve/following{/other_user}", "gists_url": "https://api.github.com/users/louisdeneve/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/louisdeneve", "id": 39416047, "login": "louisdeneve", "node_id": "MDQ6VXNlcjM5NDE2MDQ3", "organizations_url": "https://api.github.com/users/louisdeneve/orgs", "received_events_url": "https://api.github.com/users/louisdeneve/received_events", "repos_url": "https://api.github.com/users/louisdeneve/repos", "site_admin": false, "starred_url": "https://api.github.com/users/louisdeneve/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/louisdeneve/subscriptions", "type": "User", "url": "https://api.github.com/users/louisdeneve" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2022-09-01T10:25:17Z
2022-09-01T11:50:54Z
2022-09-01T11:50:54Z
NONE
null
null
null
## Describe the bug When loading the datasets separately and saving them on disk, I want to concatenate them. But `concatenate_datasets` is filling up my RAM and the process gets killed. Is there a way to prevent this from happening, or is this intended behaviour? Thanks in advance ## Steps to reproduce the bug ```python gcs = gcsfs.GCSFileSystem(project='project') datasets = [load_from_disk(f'path/to/slice/of/data/{i}', fs=gcs, keep_in_memory=False) for i in range(10)] dataset = concatenate_datasets(datasets) ``` ## Expected results A concatenated dataset which is stored on my disk. ## Actual results The concatenated dataset gets loaded into RAM and overflows it, which gets the process killed. ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version: 2.4.0 - Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10 - Python version: 3.8.13 - PyArrow version: 8.0.1 - Pandas version: 1.4.3
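A workaround sketch for keeping memory bounded, assuming the shards are available locally as saved Arrow datasets (the paths below are hypothetical): with `keep_in_memory=False` the shards are memory-mapped, `concatenate_datasets` should not materialize the rows, and `save_to_disk` writes the result back out.

```python
from datasets import load_from_disk, concatenate_datasets

# With keep_in_memory=False the Arrow files are memory-mapped, not read into RAM.
shards = [load_from_disk(f"/data/slice_{i}", keep_in_memory=False) for i in range(10)]

combined = concatenate_datasets(shards)
combined.save_to_disk("/data/combined")  # persist the concatenated dataset
```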
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4924/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4924/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4107
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4107/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4107/comments
https://api.github.com/repos/huggingface/datasets/issues/4107/events
https://github.com/huggingface/datasets/issues/4107
1,194,484,885
I_kwDODunzps5HMmSV
4,107
Unable to view the dataset and loading the same dataset throws the error - ArrowInvalid: Exceeded maximum rows
{ "avatar_url": "https://avatars.githubusercontent.com/u/23344465?v=4", "events_url": "https://api.github.com/users/Pavithree/events{/privacy}", "followers_url": "https://api.github.com/users/Pavithree/followers", "following_url": "https://api.github.com/users/Pavithree/following{/other_user}", "gists_url": "https://api.github.com/users/Pavithree/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Pavithree", "id": 23344465, "login": "Pavithree", "node_id": "MDQ6VXNlcjIzMzQ0NDY1", "organizations_url": "https://api.github.com/users/Pavithree/orgs", "received_events_url": "https://api.github.com/users/Pavithree/received_events", "repos_url": "https://api.github.com/users/Pavithree/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Pavithree/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Pavithree/subscriptions", "type": "User", "url": "https://api.github.com/users/Pavithree" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Thanks for reporting. I'm looking at it", " It's not related to the dataset viewer in itself. I can replicate the error with:\r\n\r\n```\r\n>>> import datasets as ds\r\n>>> d = ds.load_dataset('Pavithree/explainLikeImFive')\r\nUsing custom data configuration Pavithree--explainLikeImFive-b68b6d8112cd8a51\r\nDownloading and preparing dataset json/Pavithree--explainLikeImFive to /home/slesage/.cache/huggingface/datasets/json/Pavithree--explainLikeImFive-b68b6d8112cd8a51/0.0.0/ac0ca5f5289a6cf108e706efcf040422dbbfa8e658dee6a819f20d76bb84d26b...\r\nDownloading data: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 305M/305M [00:03<00:00, 98.6MB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17.9M/17.9M [00:00<00:00, 75.7MB/s]\r\nDownloading data: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11.9M/11.9M [00:00<00:00, 70.6MB/s]\r\nDownloading data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:05<00:00, 1.92s/it]\r\nExtracting data files: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:00<00:00, 1948.42it/s]\r\nFailed to read file '/home/slesage/.cache/huggingface/datasets/downloads/5fee9c8819754df277aee6f252e4db6897d785231c21938407b8862ca871d246' with error <class 'pyarrow.lib.ArrowInvalid'>: Exceeded maximum rows\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 144, in _generate_tables\r\n dataset = json.load(f)\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/__init__.py\", line 293, in load\r\n return loads(fp.read(),\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/__init__.py\", line 357, in loads\r\n return _default_decoder.decode(s)\r\n File \"/home/slesage/.pyenv/versions/3.8.11/lib/python3.8/json/decoder.py\", line 340, in decode\r\n raise JSONDecodeError(\"Extra data\", s, end)\r\njson.decoder.JSONDecodeError: Extra data: line 1 column 916 (char 915)\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets/src/datasets/load.py\", line 1691, in load_dataset\r\n builder_instance.download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 605, in download_and_prepare\r\n self._download_and_prepare(\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 694, in _download_and_prepare\r\n self._prepare_split(split_generator, **prepare_split_kwargs)\r\n File \"/home/slesage/hf/datasets/src/datasets/builder.py\", line 1151, in _prepare_split\r\n for key, table in logging.tqdm(\r\n File 
\"/home/slesage/.pyenv/versions/datasets/lib/python3.8/site-packages/tqdm/std.py\", line 1168, in __iter__\r\n for obj in iterable:\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 146, in _generate_tables\r\n raise e\r\n File \"/home/slesage/hf/datasets/src/datasets/packaged_modules/json/json.py\", line 122, in _generate_tables\r\n pa_table = paj.read_json(\r\n File \"pyarrow/_json.pyx\", line 246, in pyarrow._json.read_json\r\n File \"pyarrow/error.pxi\", line 143, in pyarrow.lib.pyarrow_internal_check_status\r\n File \"pyarrow/error.pxi\", line 99, in pyarrow.lib.check_status\r\npyarrow.lib.ArrowInvalid: Exceeded maximum rows\r\n```\r\n\r\ncc @lhoestq @albertvillanova @mariosasko ", "It seems that train.json is not a valid JSON Lines file: it has several JSON objects in the first line (the 915th character in the first line starts a new object, and there's no \"\\n\")\r\n\r\nYou need to have one JSON object per line", "I'm closing this issue.\r\n\r\n@Pavithree, please, feel free to re-open it if fixing the JSON file does not solve it.", "Thank you! that fixes the issue." ]
2022-04-06T11:37:15Z
2022-04-08T07:13:07Z
2022-04-06T14:39:55Z
NONE
null
null
null
## Dataset viewer issue - ArrowInvalid: Exceeded maximum rows **Link:** *https://huggingface.co/datasets/Pavithree/explainLikeImFive* *This is a subset of the original eli5 dataset https://huggingface.co/datasets/vblagoje/lfqa. I just filtered the data samples which belong to one particular subreddit thread. However, the dataset preview for the train split returns the error below: Status code: 400 Exception: ArrowInvalid Message: Exceeded maximum rows When I try to load the same dataset, it returns the ArrowInvalid: Exceeded maximum rows error* Am I the one who added this dataset ? Yes
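As diagnosed in the comments above, the root cause is a train.json with several JSON objects on one line rather than valid JSON Lines. A repair sketch (file names are hypothetical) that splits the concatenated objects so there is one object per line:

```python
import json

decoder = json.JSONDecoder()

# Read the malformed file and re-emit one JSON object per line.
with open("train.json") as src, open("train.jsonl", "w") as dst:
    text = src.read().strip()
    pos = 0
    while pos < len(text):
        obj, end = decoder.raw_decode(text, pos)        # parse the next JSON object
        dst.write(json.dumps(obj) + "\n")               # one object per line
        pos = end
        while pos < len(text) and text[pos].isspace():  # skip whitespace between objects
            pos += 1
```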
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4107/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4107/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/869
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/869/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/869/comments
https://api.github.com/repos/huggingface/datasets/issues/869/events
https://github.com/huggingface/datasets/pull/869
746,495,711
MDExOlB1bGxSZXF1ZXN0NTIzODc3OTkw
869
Update ner datasets infos
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ ":+1: Thanks for fixing it!" ]
2020-11-19T11:28:03Z
2020-11-19T14:14:18Z
2020-11-19T14:14:17Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/869.diff", "html_url": "https://github.com/huggingface/datasets/pull/869", "merged_at": "2020-11-19T14:14:17Z", "patch_url": "https://github.com/huggingface/datasets/pull/869.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/869" }
Update the dataset_infos.json files for changes made in #850 regarding the ner datasets feature types (and the change to ClassLabel). I also fixed the ner types of conll2003.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/869/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/869/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/269
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/269/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/269/comments
https://api.github.com/repos/huggingface/datasets/issues/269/events
https://github.com/huggingface/datasets/issues/269
638,106,774
MDU6SXNzdWU2MzgxMDY3NzQ=
269
Error in metric.compute: missing `original_instructions` argument
{ "avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4", "events_url": "https://api.github.com/users/zphang/events{/privacy}", "followers_url": "https://api.github.com/users/zphang/followers", "following_url": "https://api.github.com/users/zphang/following{/other_user}", "gists_url": "https://api.github.com/users/zphang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zphang", "id": 1668462, "login": "zphang", "node_id": "MDQ6VXNlcjE2Njg0NjI=", "organizations_url": "https://api.github.com/users/zphang/orgs", "received_events_url": "https://api.github.com/users/zphang/received_events", "repos_url": "https://api.github.com/users/zphang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zphang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zphang/subscriptions", "type": "User", "url": "https://api.github.com/users/zphang" }
[ { "color": "25b21e", "default": false, "description": "A bug in a metric script", "id": 2067393914, "name": "metric bug", "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[]
2020-06-13T06:26:54Z
2020-06-18T07:41:44Z
2020-06-18T07:41:44Z
NONE
null
null
null
I'm running into an error using metrics for computation in the latest master as well as version 0.2.1. Here is a minimal example: ```python import nlp rte_metric = nlp.load_metric('glue', name="rte") rte_metric.compute( [0, 0, 1, 1], [0, 1, 0, 1], ) ``` ``` 181 # Read the predictions and references 182 reader = ArrowReader(path=self.data_dir, info=None) --> 183 self.data = reader.read_files(node_files) 184 185 # Release all of our locks TypeError: read_files() missing 1 required positional argument: 'original_instructions' ``` I believe this might have been introduced with cc8d2508b75f7ba0e5438d0686ee02dcec43c7f4, which added the `original_instructions` argument. Elsewhere, an empty-string default is provided--perhaps that could be done here too?
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/269/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/269/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4548
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4548/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4548/comments
https://api.github.com/repos/huggingface/datasets/issues/4548/events
https://github.com/huggingface/datasets/issues/4548
1,282,218,096
I_kwDODunzps5MbRhw
4,548
Metadata.jsonl for Imagefolder is ignored if it's in a parent directory to the splits directories/do not have "{split}_" prefix
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
[ "I agree it would be nice to support this. It doesn't fit really well in the current data_files.py, where files of each splits are separated in different folder though, maybe we have to modify a bit the logic here. \r\n\r\nOne idea would be to extend `get_patterns_in_dataset_repository` and `get_patterns_locally` to additionally check for `metadata.json`, but feel free to comment if you have better ideas (I feel like we're reaching the limits of what the current implementation IMO, so we could think of a different way of resolving the data files if necessary)" ]
2022-06-23T10:58:57Z
2022-06-30T10:15:32Z
2022-06-30T10:15:32Z
CONTRIBUTOR
null
null
null
If data contains a single `metadata.jsonl` file for several splits, it won't be included in a dataset's `data_files` and is therefore ignored. This happens when a directory is structured as follows: ``` train/ file_1.jpg file_2.jpg test/ file_3.jpg file_4.jpg metadata.jsonl ``` or as follows: ``` train_file_1.jpg train_file_2.jpg test_file_3.jpg test_file_4.jpg metadata.jsonl ``` The same holds for HF repos, because it's ignored by the patterns [here](https://github.com/huggingface/datasets/blob/master/src/datasets/data_files.py#L29) @lhoestq @mariosasko Do you think it's better to add this functionality in `data_files.py` or just specifically in imagefolder/audiofolder code? In `data_files.py` it would be more general, but I don't know if there are any other cases when that might be needed.
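Until a shared metadata file is supported, a workaround sketch: duplicate the metadata into one `metadata.jsonl` per split directory, which the folder-based loaders already pick up, and load from the root ("path/to/root" is a placeholder):

```python
from datasets import load_dataset

# Expected layout for the workaround:
#   train/metadata.jsonl, train/file_1.jpg, ...
#   test/metadata.jsonl,  test/file_3.jpg, ...
dataset = load_dataset("imagefolder", data_dir="path/to/root")
```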
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4548/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4548/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1384
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1384/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1384/comments
https://api.github.com/repos/huggingface/datasets/issues/1384/events
https://github.com/huggingface/datasets/pull/1384
760,331,767
MDExOlB1bGxSZXF1ZXN0NTM1MTgxMjg1
1,384
Add News Commentary Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "events_url": "https://api.github.com/users/abhishekkrthakur/events{/privacy}", "followers_url": "https://api.github.com/users/abhishekkrthakur/followers", "following_url": "https://api.github.com/users/abhishekkrthakur/following{/other_user}", "gists_url": "https://api.github.com/users/abhishekkrthakur/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abhishekkrthakur", "id": 1183441, "login": "abhishekkrthakur", "node_id": "MDQ6VXNlcjExODM0NDE=", "organizations_url": "https://api.github.com/users/abhishekkrthakur/orgs", "received_events_url": "https://api.github.com/users/abhishekkrthakur/received_events", "repos_url": "https://api.github.com/users/abhishekkrthakur/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abhishekkrthakur/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abhishekkrthakur/subscriptions", "type": "User", "url": "https://api.github.com/users/abhishekkrthakur" }
[]
closed
false
null
[]
null
[]
2020-12-09T13:30:36Z
2020-12-10T16:54:08Z
2020-12-10T16:54:07Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1384.diff", "html_url": "https://github.com/huggingface/datasets/pull/1384", "merged_at": "2020-12-10T16:54:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/1384.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1384" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1384/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1384/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/463
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/463/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/463/comments
https://api.github.com/repos/huggingface/datasets/issues/463/events
https://github.com/huggingface/datasets/pull/463
669,735,455
MDExOlB1bGxSZXF1ZXN0NDYwMDcyNjQ1
463
Add dataset/mlsum
{ "avatar_url": "https://avatars.githubusercontent.com/u/36986299?v=4", "events_url": "https://api.github.com/users/RachelKer/events{/privacy}", "followers_url": "https://api.github.com/users/RachelKer/followers", "following_url": "https://api.github.com/users/RachelKer/following{/other_user}", "gists_url": "https://api.github.com/users/RachelKer/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/RachelKer", "id": 36986299, "login": "RachelKer", "node_id": "MDQ6VXNlcjM2OTg2Mjk5", "organizations_url": "https://api.github.com/users/RachelKer/orgs", "received_events_url": "https://api.github.com/users/RachelKer/received_events", "repos_url": "https://api.github.com/users/RachelKer/repos", "site_admin": false, "starred_url": "https://api.github.com/users/RachelKer/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/RachelKer/subscriptions", "type": "User", "url": "https://api.github.com/users/RachelKer" }
[]
closed
false
null
[]
null
[ "I think the problem is related to `wiki_dpr` dataset which is making the circle CI failed as you can see:\r\n```\r\nFAILED tests/test_dataset_common.py::AWSDatasetTest::test_load_dataset_wiki_dpr\r\nFAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/dummy_psgs_w100_no_embeddings\r\nFAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/dummy_psgs_w100_with_nq_embeddings\r\nFAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/psgs_w100_no_embeddings\r\nFAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_wiki_dpr/psgs_w100_with_nq_embeddings\r\n\r\n```\r\nI'm facing the same issues with my last commits, I tried to rebase from master but it still not working. Maybe @lhoestq can help with.", "Hello, I am confused about the next steps I need to do. Did the forced merge solve the issue ?", "Hello :)\r\nI think you can just rebase from master and it should solve the CI error" ]
2020-07-31T11:50:52Z
2020-08-24T14:54:42Z
2020-08-24T14:54:42Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/463.diff", "html_url": "https://github.com/huggingface/datasets/pull/463", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/463.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/463" }
New pull request that should correct the previous errors. load_real_data still fails because it is looking for a default dataset URL that does not exist; this does not happen when loading the dataset with load_dataset.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/463/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/463/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2459
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2459/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2459/comments
https://api.github.com/repos/huggingface/datasets/issues/2459/events
https://github.com/huggingface/datasets/issues/2459
915,222,015
MDU6SXNzdWU5MTUyMjIwMTU=
2,459
`Proto_qa` hosting seems to be broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VictorSanh", "id": 16107619, "login": "VictorSanh", "node_id": "MDQ6VXNlcjE2MTA3NjE5", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "repos_url": "https://api.github.com/users/VictorSanh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "type": "User", "url": "https://api.github.com/users/VictorSanh" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "@VictorSanh , I think @mariosasko is already working on it. " ]
2021-06-08T16:16:32Z
2021-06-10T08:31:09Z
2021-06-10T08:31:09Z
MEMBER
null
null
null
## Describe the bug The hosting (on Github) of the `proto_qa` dataset seems broken. I haven't investigated more yet, just flagging it for now. @zaidalyafeai if you want to dive into it, I think it's just a matter of changing the links in `proto_qa.py` ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("proto_qa") ``` ## Actual results ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/load.py", line 751, in load_dataset use_auth_token=use_auth_token, File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 575, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/builder.py", line 630, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/hf/.cache/huggingface/modules/datasets_modules/datasets/proto_qa/445346efaad5c5f200ecda4aa7f0fb50ff1b55edde3003be424a2112c3e8102e/proto_qa.py", line 131, in _split_generators train_fpath = dl_manager.download(_URLs[self.config.name]["train"]) File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 199, in download num_proc=download_config.num_proc, File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 195, in map_nested return function(data_struct) File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 218, in _download return cached_path(url_or_filename, download_config=download_config) File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 291, in cached_path use_auth_token=download_config.use_auth_token, File "/home/hf/dev/promptsource/.venv/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/iesl/protoqa-data/master/data/train/protoqa_train.jsonl ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2459/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2459/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3306
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3306/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3306/comments
https://api.github.com/repos/huggingface/datasets/issues/3306/events
https://github.com/huggingface/datasets/issues/3306
1,059,185,860
I_kwDODunzps4_IeTE
3,306
nested sequence feature won't encode example if the first item of the outside sequence is an empty list
{ "avatar_url": "https://avatars.githubusercontent.com/u/38486514?v=4", "events_url": "https://api.github.com/users/function2-llx/events{/privacy}", "followers_url": "https://api.github.com/users/function2-llx/followers", "following_url": "https://api.github.com/users/function2-llx/following{/other_user}", "gists_url": "https://api.github.com/users/function2-llx/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/function2-llx", "id": 38486514, "login": "function2-llx", "node_id": "MDQ6VXNlcjM4NDg2NTE0", "organizations_url": "https://api.github.com/users/function2-llx/orgs", "received_events_url": "https://api.github.com/users/function2-llx/received_events", "repos_url": "https://api.github.com/users/function2-llx/repos", "site_admin": false, "starred_url": "https://api.github.com/users/function2-llx/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/function2-llx/subscriptions", "type": "User", "url": "https://api.github.com/users/function2-llx" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" } ]
null
[ "knock knock", "Hi, thanks for reporting! I've linked a PR that should fix the issue.", "I've checked the PR and it looks great, thanks a lot!" ]
2021-11-20T16:57:54Z
2021-12-08T13:02:15Z
2021-12-08T13:02:15Z
NONE
null
null
null
## Describe the bug As the title says, a nested sequence feature won't encode an example if the first item of the outer sequence is an empty list. ## Steps to reproduce the bug ```python from datasets import Features, Sequence, ClassLabel features = Features({ 'x': Sequence(Sequence(ClassLabel(names=['a', 'b']))), }) print(features.encode_batch({ 'x': [ [['a'], ['b']], [[], ['b']], ] })) ``` ## Expected results print `{'x': [[[0], [1]], [[], [1]]]}` ## Actual results print `{'x': [[[0], [1]], [[], ['b']]]}` ## Environment info - `datasets` version: 1.15.1 - Platform: Linux-5.13.0-21-generic-x86_64-with-glibc2.34 - Python version: 3.9.7 - PyArrow version: 6.0.0 ## Additional information I think the issue stems from [here](https://github.com/huggingface/datasets/blob/8555197a3fe826e98bd0206c2d031c4488c53c5c/src/datasets/features/features.py#L847-L848).
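Until the fix lands, a manual pre-encoding sketch using `ClassLabel.str2int`, which sidesteps `encode_batch` for the nested labels:

```python
from datasets import ClassLabel

label = ClassLabel(names=["a", "b"])

batch = [[["a"], ["b"]], [[], ["b"]]]
# Map label names to ids by hand, so an empty first sublist can't be
# mistaken for already-encoded data.
encoded = [[[label.str2int(name) for name in inner] for inner in outer] for outer in batch]
print(encoded)  # [[[0], [1]], [[], [1]]]
```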
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3306/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3306/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3381
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3381/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3381/comments
https://api.github.com/repos/huggingface/datasets/issues/3381/events
https://github.com/huggingface/datasets/issues/3381
1,071,283,879
I_kwDODunzps4_2n6n
3,381
Unable to load audio_features from common_voice dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/8268102?v=4", "events_url": "https://api.github.com/users/ashu5644/events{/privacy}", "followers_url": "https://api.github.com/users/ashu5644/followers", "following_url": "https://api.github.com/users/ashu5644/following{/other_user}", "gists_url": "https://api.github.com/users/ashu5644/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ashu5644", "id": 8268102, "login": "ashu5644", "node_id": "MDQ6VXNlcjgyNjgxMDI=", "organizations_url": "https://api.github.com/users/ashu5644/orgs", "received_events_url": "https://api.github.com/users/ashu5644/received_events", "repos_url": "https://api.github.com/users/ashu5644/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ashu5644/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ashu5644/subscriptions", "type": "User", "url": "https://api.github.com/users/ashu5644" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi ! Feel free to access `batch[\"audio\"][\"array\"]` and `batch[\"audio\"][\"sampling_rate\"]` instead\r\n\r\n`datasets` 1.16 introduced some changes in `common_voice` and now the `path` field is no longer a path to a local file (but rather the path to the file in the archive it's extracted from)", "Thanks for the information. It works.", "Cool ! Closing this issue then" ]
2021-12-04T19:59:11Z
2021-12-06T17:52:42Z
2021-12-06T17:52:42Z
NONE
null
null
null
## Describe the bug

I am not able to load audio features from the common_voice dataset.

## Steps to reproduce the bug

```python
from datasets import load_dataset
import torchaudio

test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
resampler = torchaudio.transforms.Resample(48_000, 16_000)

def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```

## Expected results

This piece of code should return test_dataset after loading audio features.

## Actual results

```
Reusing dataset common_voice (/home/jovyan/.cache/huggingface/datasets/common_voice/hi/6.1.0/b879a355caa529b11f2249400b61cadd0d9433f334d5c60f8c7216ccedfecfe1)
/opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py:341: UserWarning: Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 Transformers. Using `model.gradient_checkpointing_enable()` instead, or if you are using the `Trainer` API, pass `gradient_checkpointing=True` in your `TrainingArguments`.
  "Passing `gradient_checkpointing` to a config initialization is deprecated and will be removed in v5 "
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
  0%| | 0/3 [00:00<?, ?ex/s]formats: can't open input file `common_voice_hi_23795358.mp3': No such file or directory
  0%| | 0/3 [00:00<?, ?ex/s]
Traceback (most recent call last):
  File "demo_file.py", line 23, in <module>
    test_dataset = test_dataset.map(speech_file_to_array_fn)
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2036, in map
    desc=desc,
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 518, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 485, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 411, in wrapper
    out = func(self, *args, **kwargs)
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2368, in _map_single
    example = apply_function_on_filtered_inputs(example, i, offset=offset)
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2277, in apply_function_on_filtered_inputs
    processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
  File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1978, in decorated
    result = f(decorated_item, *args, **kwargs)
  File "demo_file.py", line 19, in speech_file_to_array_fn
    speech_array, sampling_rate = torchaudio.load(batch["path"])
  File "/opt/conda/lib/python3.7/site-packages/torchaudio/backend/sox_io_backend.py", line 154, in load
    filepath, frame_offset, num_frames, normalize, channels_first, format)
RuntimeError: Error loading audio file: failed to open file common_voice_hi_23795358.mp3
```

## Environment info

- `datasets` version: 1.16.1
- Platform: Linux-4.14.243-with-debian-bullseye-sid
- Python version: 3.7.9
- PyArrow version: 6.0.1
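Following the fix suggested in the comments above, a hedged sketch of the updated map function: it reads the decoded `audio` column instead of `path`, and the resampler setup assumes common_voice's 48 kHz source rate.

```python
from datasets import load_dataset
import torch
import torchaudio

test_dataset = load_dataset("common_voice", "hi", split="test[:2%]")
resampler = torchaudio.transforms.Resample(48_000, 16_000)

def speech_file_to_array_fn(batch):
    # "path" no longer points to a local file in datasets >= 1.16;
    # use the decoded array that the Audio feature provides instead.
    speech_array = torch.tensor(batch["audio"]["array"]).float()
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```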
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3381/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3381/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1500
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1500/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1500/comments
https://api.github.com/repos/huggingface/datasets/issues/1500/events
https://github.com/huggingface/datasets/pull/1500
763,479,305
MDExOlB1bGxSZXF1ZXN0NTM3OTM0OTI1
1,500
adding polsum
{ "avatar_url": "https://avatars.githubusercontent.com/u/15803781?v=4", "events_url": "https://api.github.com/users/kldarek/events{/privacy}", "followers_url": "https://api.github.com/users/kldarek/followers", "following_url": "https://api.github.com/users/kldarek/following{/other_user}", "gists_url": "https://api.github.com/users/kldarek/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kldarek", "id": 15803781, "login": "kldarek", "node_id": "MDQ6VXNlcjE1ODAzNzgx", "organizations_url": "https://api.github.com/users/kldarek/orgs", "received_events_url": "https://api.github.com/users/kldarek/received_events", "repos_url": "https://api.github.com/users/kldarek/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kldarek/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kldarek/subscriptions", "type": "User", "url": "https://api.github.com/users/kldarek" }
[]
closed
false
null
[]
null
[ "@lhoestq thanks for the comments! Should be fixed in the latest commit, I assume the CI errors are unrelated." ]
2020-12-12T09:05:29Z
2020-12-18T09:43:43Z
2020-12-18T09:43:43Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1500.diff", "html_url": "https://github.com/huggingface/datasets/pull/1500", "merged_at": "2020-12-18T09:43:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/1500.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1500" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1500/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1500/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/693
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/693/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/693/comments
https://api.github.com/repos/huggingface/datasets/issues/693/events
https://github.com/huggingface/datasets/pull/693
712,822,200
MDExOlB1bGxSZXF1ZXN0NDk2MjQxMjUw
693
Rachel ker add dataset/mlsum
{ "avatar_url": "https://avatars.githubusercontent.com/u/32742136?v=4", "events_url": "https://api.github.com/users/pdhg/events{/privacy}", "followers_url": "https://api.github.com/users/pdhg/followers", "following_url": "https://api.github.com/users/pdhg/following{/other_user}", "gists_url": "https://api.github.com/users/pdhg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pdhg", "id": 32742136, "login": "pdhg", "node_id": "MDQ6VXNlcjMyNzQyMTM2", "organizations_url": "https://api.github.com/users/pdhg/orgs", "received_events_url": "https://api.github.com/users/pdhg/received_events", "repos_url": "https://api.github.com/users/pdhg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pdhg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pdhg/subscriptions", "type": "User", "url": "https://api.github.com/users/pdhg" }
[]
closed
false
null
[]
null
[ "It looks like an outdated PR (we've already added mlsum). Closing it" ]
2020-10-01T13:01:10Z
2023-09-24T09:48:23Z
2020-10-01T17:01:13Z
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/693.diff", "html_url": "https://github.com/huggingface/datasets/pull/693", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/693.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/693" }
.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/693/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/693/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4428
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4428/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4428/comments
https://api.github.com/repos/huggingface/datasets/issues/4428/events
https://github.com/huggingface/datasets/issues/4428
1,254,092,818
I_kwDODunzps5Kv_AS
4,428
Errors when building dummy data if you use nested _URLS
{ "avatar_url": "https://avatars.githubusercontent.com/u/2529049?v=4", "events_url": "https://api.github.com/users/silverriver/events{/privacy}", "followers_url": "https://api.github.com/users/silverriver/followers", "following_url": "https://api.github.com/users/silverriver/following{/other_user}", "gists_url": "https://api.github.com/users/silverriver/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/silverriver", "id": 2529049, "login": "silverriver", "node_id": "MDQ6VXNlcjI1MjkwNDk=", "organizations_url": "https://api.github.com/users/silverriver/orgs", "received_events_url": "https://api.github.com/users/silverriver/received_events", "repos_url": "https://api.github.com/users/silverriver/repos", "site_admin": false, "starred_url": "https://api.github.com/users/silverriver/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/silverriver/subscriptions", "type": "User", "url": "https://api.github.com/users/silverriver" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[]
2022-05-31T16:10:57Z
2022-06-07T09:24:09Z
2022-06-07T09:24:09Z
CONTRIBUTOR
null
null
null
## Describe the bug

When making dummy data with the `datasets-cli dummy_data` tool, an error is raised if you use a nested `_URLS` in your dataset script.

```
Traceback (most recent call last):
  File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 43, in <module>
    main()
  File "/home/name/LCCC/datasets/src/datasets/commands/datasets_cli.py", line 39, in main
    service.run()
  File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 311, in run
    self._autogenerate_dummy_data(
  File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 337, in _autogenerate_dummy_data
    dataset_builder._split_generators(dl_manager)
  File "/home/name/.cache/huggingface/modules/datasets_modules/datasets/personal_dialog/559332bced5eeafa7f7efc2a7c10ce02cee2a8116bbab4611c35a50ba2715b77/personal_dialog.py", line 108, in _split_generators
    data_dir = dl_manager.download_and_extract(urls)
  File "/home/name/LCCC/datasets/src/datasets/commands/dummy_data.py", line 56, in download_and_extract
    dummy_output = self.mock_download_manager.download(url_or_urls)
  File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 130, in download
    return self.download_and_extract(data_url)
  File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 122, in download_and_extract
    return self.create_dummy_data_dict(dummy_file, data_url)
  File "/home/name/LCCC/datasets/src/datasets/download/mock_download_manager.py", line 165, in create_dummy_data_dict
    if isinstance(first_value, str) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
TypeError: unhashable type: 'list'
```

## Steps to reproduce the bug

You can use my dataset script implemented here: https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py

```bash
datasets_cli dummy_data datasets/personal_dialog --auto_generate
```

You can change https://github.com/silverriver/datasets/blob/2ecd36760c40b8e29b1137cd19b5bad0e19c76fd/datasets/personal_dialog/personal_dialog.py#L54 to

```
"train": "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_random.jsonl.gz"
```

before running the above script to avoid downloading the large training data.

## Expected results

The dummy data should be generated.

## Actual results

An error is raised.

It seems that in https://github.com/huggingface/datasets/blob/12540dd75015678ec6019f258d811ee107439a73/src/datasets/download/mock_download_manager.py#L165 we only check whether the first item of `dummy_data_dict.values()` is a str. However, `dummy_data_dict.values()` may have the type `[str, list, list]`. A simple fix would be changing that line to

```python
if all(isinstance(value, str) for value in dummy_data_dict.values()) and len(set(dummy_data_dict.values())) < len(dummy_data_dict.values()):
```

But I don't know if this kind of change may bring any side effects, since I am not sure about the detailed logic here.

## Environment info

- `datasets` version:
- Platform: Linux
- Python version: Python 3.9.10
- PyArrow version: 7.0.0
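A hedged, standalone sketch of the guard proposed above (the helper name is made up): run the `set()`-based duplicate check only when every dummy value is a string, since lists are unhashable.

```python
def safe_duplicate_check(dummy_data_dict: dict) -> bool:
    """Return True if two splits point at the same dummy file (hypothetical helper)."""
    values = list(dummy_data_dict.values())
    if not all(isinstance(v, str) for v in values):
        return False  # mixed str/list values would crash set() below
    return len(set(values)) < len(values)

assert safe_duplicate_check({"train": "a.txt", "dev": "a.txt"})
assert not safe_duplicate_check({"train": "a.txt", "dev": ["b.txt", "c.txt"]})
```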
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4428/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4428/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3525
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3525/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3525/comments
https://api.github.com/repos/huggingface/datasets/issues/3525/events
https://github.com/huggingface/datasets/pull/3525
1,093,831,268
PR_kwDODunzps4wiL8p
3,525
Adding license information for Openbookcorpus
{ "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/meg-huggingface", "id": 90473723, "login": "meg-huggingface", "node_id": "MDQ6VXNlcjkwNDczNzIz", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "site_admin": false, "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "type": "User", "url": "https://api.github.com/users/meg-huggingface" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/meg-huggingface", "id": 90473723, "login": "meg-huggingface", "node_id": "MDQ6VXNlcjkwNDczNzIz", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "site_admin": false, "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "type": "User", "url": "https://api.github.com/users/meg-huggingface" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/90473723?v=4", "events_url": "https://api.github.com/users/meg-huggingface/events{/privacy}", "followers_url": "https://api.github.com/users/meg-huggingface/followers", "following_url": "https://api.github.com/users/meg-huggingface/following{/other_user}", "gists_url": "https://api.github.com/users/meg-huggingface/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/meg-huggingface", "id": 90473723, "login": "meg-huggingface", "node_id": "MDQ6VXNlcjkwNDczNzIz", "organizations_url": "https://api.github.com/users/meg-huggingface/orgs", "received_events_url": "https://api.github.com/users/meg-huggingface/received_events", "repos_url": "https://api.github.com/users/meg-huggingface/repos", "site_admin": false, "starred_url": "https://api.github.com/users/meg-huggingface/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/meg-huggingface/subscriptions", "type": "User", "url": "https://api.github.com/users/meg-huggingface" } ]
null
[ "The MIT license seems to be for the crawling code, no ? Then maybe we can also redirect users to the [terms of smashwords.com](https://www.smashwords.com/about/tos) regarding copyrights, in particular the paragraph 10 for end-users. In particular it seems that end users can download and use the content \"for their personal enjoyment in any reasonable non-commercial manner in compliance with copyright law\" and the smashwords end-users agreement.\r\n\r\nIt should be the same for https://github.com/huggingface/datasets/pull/3526 as well", "May I merge this one ?", "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-01-04T23:20:36Z
2022-04-20T09:54:30Z
2022-04-20T09:48:10Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3525.diff", "html_url": "https://github.com/huggingface/datasets/pull/3525", "merged_at": "2022-04-20T09:48:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/3525.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3525" }
Not entirely sure, following the links here, but it seems the relevant license is at https://github.com/soskek/bookcorpus/blob/master/LICENSE
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3525/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3525/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6481
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6481/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6481/comments
https://api.github.com/repos/huggingface/datasets/issues/6481/events
https://github.com/huggingface/datasets/issues/6481
2,032,650,003
I_kwDODunzps55J8cT
6,481
using torchrun, save_to_disk suddenly shows SIGTERM
{ "avatar_url": "https://avatars.githubusercontent.com/u/85916625?v=4", "events_url": "https://api.github.com/users/Ariya12138/events{/privacy}", "followers_url": "https://api.github.com/users/Ariya12138/followers", "following_url": "https://api.github.com/users/Ariya12138/following{/other_user}", "gists_url": "https://api.github.com/users/Ariya12138/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Ariya12138", "id": 85916625, "login": "Ariya12138", "node_id": "MDQ6VXNlcjg1OTE2NjI1", "organizations_url": "https://api.github.com/users/Ariya12138/orgs", "received_events_url": "https://api.github.com/users/Ariya12138/received_events", "repos_url": "https://api.github.com/users/Ariya12138/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Ariya12138/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Ariya12138/subscriptions", "type": "User", "url": "https://api.github.com/users/Ariya12138" }
[]
open
false
null
[]
null
[]
2023-12-08T13:22:03Z
2023-12-08T13:22:03Z
null
NONE
null
null
null
### Describe the bug

When I run my code using the `torchrun` command, it suddenly fails with the warning and error messages below once it reaches the `save_to_disk` part. Because the dataset is too large, `save_to_disk` splits it into 70 shards for saving, and the error occurs when it reaches the 14th shard.

```
WARNING: torch.distributed.elastic.multiprocessing.api: Sending process 2224968 closing signal SIGTERM
ERROR: torch.distributed.elastic.multiprocessing.api: failed (exitcode: -7). traceback: Signal 7 (SIGBUS) received by PID 2224967.
```

### Steps to reproduce the bug

```python
ds_shard = ds_shard.map(map_fn, *args, **kwargs)
ds_shard.save_to_disk(ds_shard_filepaths[rank])
```

```
Saving the dataset (14/70 shards):  20%|██ | 875350/4376702 [00:19<01:53, 30863.15 examples/s]
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2224968 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -7) local_rank: 0 (pid: 2224967) of binary: /home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/python
Traceback (most recent call last):
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/bingxing2/home/scx6964/.conda/envs/ariya235/lib/python3.10/site-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
==========================================================
run.py FAILED
----------------------------------------------------------
Failures:
  <NO_OTHER_FAILURES>
----------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2023-12-08_20:09:04
  rank      : 0 (local_rank: 0)
  exitcode  : -7 (pid: 2224967)
  error_file: <N/A>
  traceback : Signal 7 (SIGBUS) received by PID 2224967
```

### Expected behavior

I expect the dataset to save successfully without any issues, but it seems there is a problem.

### Environment info

- `datasets` version: 2.14.6
- Platform: Linux-4.19.90-24.4.v2101.ky10.aarch64-aarch64-with-glibc2.28
- Python version: 3.10.11
- Huggingface_hub version: 0.17.3
- PyArrow version: 14.0.0
- Pandas version: 2.1.2
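Not from the issue itself, but a hedged mitigation sketch for this kind of multi-worker save (all names and paths are illustrative, and `ds`/`map_fn` stand in for the objects in the snippet above): give each rank its own output directory and cap the shard size, so no two processes contend for the same files or hold a huge write buffer at once.

```python
import torch.distributed as dist

rank = dist.get_rank()
world_size = dist.get_world_size()

# Each rank processes and saves only its own slice of the dataset,
# writing to a rank-specific (hypothetical) path.
ds_shard = ds.shard(num_shards=world_size, index=rank)
ds_shard = ds_shard.map(map_fn)
ds_shard.save_to_disk(f"/scratch/out/rank_{rank}", max_shard_size="500MB")
dist.barrier()  # wait until every rank has finished writing before reading back
```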
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6481/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6481/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/39
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/39/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/39/comments
https://api.github.com/repos/huggingface/datasets/issues/39/events
https://github.com/huggingface/datasets/pull/39
611,712,135
MDExOlB1bGxSZXF1ZXN0NDEyODIxNTA4
39
[Test] improve slow testing
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[]
2020-05-04T08:58:33Z
2020-05-04T08:59:50Z
2020-05-04T08:59:49Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/39.diff", "html_url": "https://github.com/huggingface/datasets/pull/39", "merged_at": "2020-05-04T08:59:49Z", "patch_url": "https://github.com/huggingface/datasets/pull/39.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/39" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/39/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/39/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1898
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1898/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1898/comments
https://api.github.com/repos/huggingface/datasets/issues/1898/events
https://github.com/huggingface/datasets/issues/1898
810,157,251
MDU6SXNzdWU4MTAxNTcyNTE=
1,898
ALT dataset has repeating instances in all splits
{ "avatar_url": "https://avatars.githubusercontent.com/u/33179372?v=4", "events_url": "https://api.github.com/users/10-zin/events{/privacy}", "followers_url": "https://api.github.com/users/10-zin/followers", "following_url": "https://api.github.com/users/10-zin/following{/other_user}", "gists_url": "https://api.github.com/users/10-zin/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/10-zin", "id": 33179372, "login": "10-zin", "node_id": "MDQ6VXNlcjMzMTc5Mzcy", "organizations_url": "https://api.github.com/users/10-zin/orgs", "received_events_url": "https://api.github.com/users/10-zin/received_events", "repos_url": "https://api.github.com/users/10-zin/repos", "site_admin": false, "starred_url": "https://api.github.com/users/10-zin/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/10-zin/subscriptions", "type": "User", "url": "https://api.github.com/users/10-zin" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "Thanks for reporting. This looks like a very bad issue. I'm looking into it", "I just merged a fix, we'll do a patch release soon. Thanks again for reporting, and sorry for the inconvenience.\r\nIn the meantime you can load `ALT` using `datasets` from the master branch", "Thanks!!! works perfectly in the bleading edge master version", "Closed by #1899" ]
2021-02-17T12:51:42Z
2021-02-19T06:18:46Z
2021-02-19T06:18:46Z
NONE
null
null
null
The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/ It seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized and has all splits. Would be great if this could be fixed :) Added a snapshot of the contents from the `explore-dataset` feature, for quick reference.

![image](https://user-images.githubusercontent.com/33179372/108206321-442a2d00-714c-11eb-882f-b4b6e708ef9c.png)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1898/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1898/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4633/comments
https://api.github.com/repos/huggingface/datasets/issues/4633/events
https://github.com/huggingface/datasets/pull/4633
1,294,367,783
PR_kwDODunzps462_qX
4,633
[data_files] Only match separated split names
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "I ran a script to find affected datasets (just did it on non-private non-gated). Adding \"testing\" and \"evaluation\" fixes all of of them except one:\r\n- projecte-aina/cat_manynames:\thuman_annotated_testset.tsv\r\n\r\nLet me open a PR on their repository to fix it\r\nEDIT: pr [here](https://huggingface.co/datasets/projecte-aina/cat_manynames/discussions/2)", "Feel free to merge @albertvillanova if it's all good to you :)", "Thanks for the feedback @albertvillanova I took your comments into account :)\r\n- added numbers as supported delimiters\r\n- used list comprehension to create the patterns list\r\n- updated the docs and the tests according to your comments\r\n\r\nLet me know what you think !", "I ended up removing the patching and the context manager :) merging" ]
2022-07-05T14:18:11Z
2022-07-18T13:20:29Z
2022-07-18T13:07:33Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4633.diff", "html_url": "https://github.com/huggingface/datasets/pull/4633", "merged_at": "2022-07-18T13:07:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/4633.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4633" }
As reported in https://github.com/huggingface/datasets/issues/4477, the current pattern matching used to infer which file goes into which split is too permissive. For example, a file "contest.py" would be considered part of a test split (it contains "test"), and "seqeval.py" as well (it contains "eval").

In this PR I made the pattern matching more robust by only matching split names **between separators**. The supported separators are dots, dashes, spaces and underscores. I updated the docs accordingly.

One detail about the tests: I had to update one test because it was using `PurePath.match` as a reference for globbing, but it doesn't support the `[..]` glob pattern. Therefore I added a `mock_fs` context manager that can be used to easily define a dummy filesystem with certain files in it and run pattern matching tests. Its code comes mostly from test_streaming_download_manager.py.

Closes https://github.com/huggingface/datasets/issues/4477
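A hedged illustration of the new matching rule (this regex is illustrative, not the PR's actual globbing code): a split name only counts when it is delimited by separators.

```python
import re

SEP = r"[._ \-0-9]"  # dots, underscores, spaces, dashes (digits were added during review)

def mentions_split(filename: str, split: str) -> bool:
    # The split name must sit between separators (or string edges),
    # not be embedded inside a longer word.
    return re.search(rf"(?:^|{SEP}){re.escape(split)}(?:$|{SEP})", filename) is not None

assert not mentions_split("contest.py", "test")    # "test" embedded in "contest"
assert not mentions_split("seqeval.py", "eval")    # "eval" embedded in "seqeval"
assert mentions_split("my_test_data.csv", "test")  # "_test_" is delimited
```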
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4633/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4633/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6477
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6477/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6477/comments
https://api.github.com/repos/huggingface/datasets/issues/6477/events
https://github.com/huggingface/datasets/pull/6477
2,028,022,374
PR_kwDODunzps5hRq_N
6,477
Fix PermissionError on Windows CI
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6477). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005383 / 0.011353 (-0.005969) | 0.003644 / 0.011008 (-0.007364) | 0.063375 / 0.038508 (0.024866) | 0.055567 / 0.023109 (0.032457) | 0.261376 / 0.275898 (-0.014522) | 0.283731 / 0.323480 (-0.039749) | 0.004022 / 0.007986 (-0.003964) | 0.002780 / 0.004328 (-0.001549) | 0.049407 / 0.004250 (0.045156) | 0.038208 / 0.037052 (0.001156) | 0.256275 / 0.258489 (-0.002214) | 0.293203 / 0.293841 (-0.000638) | 0.028411 / 0.128546 (-0.100135) | 0.010753 / 0.075646 (-0.064894) | 0.210420 / 0.419271 (-0.208851) | 0.036062 / 0.043533 (-0.007471) | 0.260455 / 0.255139 (0.005317) | 0.294991 / 0.283200 (0.011791) | 0.019020 / 0.141683 (-0.122662) | 1.118334 / 1.452155 (-0.333821) | 1.227391 / 1.492716 (-0.265325) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.094700 / 0.018006 (0.076694) | 0.302378 / 0.000490 (0.301888) | 0.000215 / 0.000200 (0.000015) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.018745 / 0.037411 (-0.018667) | 0.061103 / 0.014526 (0.046578) | 0.075369 / 0.176557 (-0.101188) | 0.121573 / 0.737135 (-0.615563) | 0.076898 / 0.296338 (-0.219440) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.284143 / 0.215209 (0.068934) | 2.774298 / 2.077655 (0.696644) | 1.483557 / 1.504120 (-0.020563) | 1.365091 / 1.541195 (-0.176104) | 1.390170 / 1.468490 (-0.078320) | 0.561179 / 4.584777 (-4.023598) | 2.401654 / 3.745712 (-1.344058) | 2.782628 / 5.269862 (-2.487233) | 1.731497 / 4.565676 (-2.834179) | 0.061798 / 0.424275 (-0.362477) | 0.004998 / 0.007607 (-0.002609) | 0.336920 / 0.226044 (0.110875) | 3.371891 / 2.268929 (1.102963) | 1.832173 / 55.444624 (-53.612452) | 1.573515 / 6.876477 (-5.302962) | 1.595609 / 2.142072 (-0.546463) | 0.647652 / 4.805227 (-4.157575) | 0.118501 / 6.500664 (-6.382164) | 0.042521 / 0.075469 (-0.032948) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939310 / 1.841788 (-0.902478) | 11.459855 / 8.074308 (3.385547) | 10.677954 / 10.191392 (0.486562) | 0.141029 / 0.680424 (-0.539395) | 0.014321 / 0.534201 (-0.519880) | 0.306679 / 0.579283 (-0.272604) | 0.262303 / 0.434364 (-0.172061) | 0.327422 / 0.540337 (-0.212915) | 0.436159 / 1.386936 (-0.950777) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005430 / 0.011353 (-0.005923) | 0.003646 / 0.011008 (-0.007362) | 0.049272 / 0.038508 (0.010764) | 0.075367 / 0.023109 (0.052257) | 0.275959 / 0.275898 (0.000061) | 0.296317 / 0.323480 (-0.027163) | 0.004129 / 0.007986 (-0.003857) | 0.002731 / 0.004328 (-0.001597) | 0.048475 / 0.004250 (0.044225) | 0.041571 / 0.037052 (0.004518) | 0.277993 / 0.258489 (0.019504) | 0.298709 / 0.293841 (0.004868) | 0.033117 / 0.128546 (-0.095429) | 0.010914 / 0.075646 (-0.064732) | 0.057599 / 0.419271 (-0.361673) | 0.033354 / 0.043533 (-0.010179) | 0.275669 / 0.255139 (0.020530) | 0.288451 / 0.283200 (0.005251) | 0.019953 / 0.141683 (-0.121729) | 1.148608 / 1.452155 (-0.303547) | 1.184818 / 1.492716 (-0.307898) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| 
new / old (diff) | 0.099566 / 0.018006 (0.081560) | 0.344935 / 0.000490 (0.344445) | 0.000221 / 0.000200 (0.000021) | 0.000043 / 0.000054 (-0.000012) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021925 / 0.037411 (-0.015486) | 0.068623 / 0.014526 (0.054097) | 0.081533 / 0.176557 (-0.095024) | 0.120996 / 0.737135 (-0.616139) | 0.082495 / 0.296338 (-0.213844) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.294990 / 0.215209 (0.079781) | 2.892344 / 2.077655 (0.814690) | 1.611090 / 1.504120 (0.106970) | 1.496072 / 1.541195 (-0.045123) | 1.486069 / 1.468490 (0.017579) | 0.569769 / 4.584777 (-4.015008) | 2.477623 / 3.745712 (-1.268089) | 2.819576 / 5.269862 (-2.450286) | 1.745717 / 4.565676 (-2.819959) | 0.063763 / 0.424275 (-0.360512) | 0.004970 / 0.007607 (-0.002637) | 0.344879 / 0.226044 (0.118834) | 3.452795 / 2.268929 (1.183867) | 1.964468 / 55.444624 (-53.480156) | 1.674526 / 6.876477 (-5.201951) | 1.679716 / 2.142072 (-0.462356) | 0.650005 / 4.805227 (-4.155222) | 0.117019 / 6.500664 (-6.383646) | 0.048297 / 0.075469 (-0.027172) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.965422 / 1.841788 (-0.876366) | 11.989414 / 8.074308 (3.915106) | 10.938462 / 10.191392 (0.747070) | 0.140089 / 0.680424 (-0.540334) | 0.015533 / 0.534201 (-0.518668) | 0.292188 / 0.579283 (-0.287095) | 0.277903 / 0.434364 (-0.156461) | 0.326164 / 0.540337 (-0.214173) | 0.565674 / 1.386936 (-0.821262) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#d78f07091bc42c41bea068bf1b6116e2bde46a6f \"CML watermark\")\n" ]
2023-12-06T08:34:53Z
2023-12-06T09:24:11Z
2023-12-06T09:17:52Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6477.diff", "html_url": "https://github.com/huggingface/datasets/pull/6477", "merged_at": "2023-12-06T09:17:52Z", "patch_url": "https://github.com/huggingface/datasets/pull/6477.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6477" }
Fix #6476.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6477/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6477/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6487
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6487/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6487/comments
https://api.github.com/repos/huggingface/datasets/issues/6487/events
https://github.com/huggingface/datasets/pull/6487
2,035,424,254
PR_kwDODunzps5hqyfV
6,487
Update builder hash with info
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6487). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.", "Closing this one in favor of https://github.com/huggingface/datasets/pull/6458/commits/565c294fc12bc547730a023a610ed4f92313d8fb in https://github.com/huggingface/datasets/pull/6458" ]
2023-12-11T11:09:16Z
2023-12-11T11:41:34Z
2023-12-11T11:41:34Z
MEMBER
null
1
{ "diff_url": "https://github.com/huggingface/datasets/pull/6487.diff", "html_url": "https://github.com/huggingface/datasets/pull/6487", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/6487.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6487" }
Currently, if you change the `dataset_info` of a dataset (e.g. in the YAML part of the README.md), the cache ignores this change. This is problematic because you want to regenerate a dataset if you change the features or the split sizes, for example after `push_to_hub`.

Ideally we should take the resolved files into account as well, but this will be for another PR.
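A minimal sketch of the idea under stated assumptions (this is not the PR's actual code): fold the resolved `dataset_info` into the builder fingerprint, so that editing the YAML regenerates the cache.

```python
import hashlib
import json

def builder_hash(config_kwargs: dict, dataset_info: dict) -> str:
    # Any change to the declared features or split sizes changes the digest,
    # so previously cached Arrow files are rebuilt instead of silently reused.
    payload = json.dumps({"config": config_kwargs, "info": dataset_info}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()[:16]
```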
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6487/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6487/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5528
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5528/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5528/comments
https://api.github.com/repos/huggingface/datasets/issues/5528/events
https://github.com/huggingface/datasets/pull/5528
1,582,195,085
PR_kwDODunzps5J13wC
5,528
Push to hub in a pull request
{ "avatar_url": "https://avatars.githubusercontent.com/u/38854604?v=4", "events_url": "https://api.github.com/users/AJDERS/events{/privacy}", "followers_url": "https://api.github.com/users/AJDERS/followers", "following_url": "https://api.github.com/users/AJDERS/following{/other_user}", "gists_url": "https://api.github.com/users/AJDERS/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/AJDERS", "id": 38854604, "login": "AJDERS", "node_id": "MDQ6VXNlcjM4ODU0NjA0", "organizations_url": "https://api.github.com/users/AJDERS/orgs", "received_events_url": "https://api.github.com/users/AJDERS/received_events", "repos_url": "https://api.github.com/users/AJDERS/repos", "site_admin": false, "starred_url": "https://api.github.com/users/AJDERS/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/AJDERS/subscriptions", "type": "User", "url": "https://api.github.com/users/AJDERS" }
[]
open
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5528). All of your documentation changes will be reflected on that endpoint.", "It seems that the parameter `create_pr` is available for [`0.8.0`](https://huggingface.co/docs/huggingface_hub/v0.8.1/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file) (its not here: [`0.7.0`](https://huggingface.co/docs/huggingface_hub/v0.7.0.rc0/en/package_reference/hf_api#huggingface_hub.HfApi.upload_file)) and onwards. I included a warning, informing the user that no PR was created.", "@nateraw you are completely right! Actually, the dataset shards is never added to the created pr, only the metadata, as the code is now. Ill look into you suggestion asap. Thank!", "@nateraw Nothing more to add, that's a perfect usage of `huggingface_hub` as far as I can tell ! :fire: \r\n\r\nA very nit improvement would be to use the [for .. else ... python statement](https://book.pythontips.com/en/latest/for_-_else.html).\r\ni.e:\r\n\r\n```py\r\nif create_pr is True and revision is not None:\r\n for discussion in get_repo_discussions(repo_id, repo_type='dataset'):\r\n if discussion.is_pull_request and discussion.git_reference == revision:\r\n create_pr = False\r\n break\r\n else:\r\n raise ValueError(\"Provided revision not found\")\r\n```\r\nNo need for the `revision_found` temporary flag when do so. Yeah ok, it's niche :wink: ", "I added the suggestions from @nateraw and @Wauplin .", "> Thanks. Some comments/suggestions below...\r\n> \r\n> Why have you removed the test for create_pr? You could add it again and just add a pytest skipif when version of huggingface_hub is lower than 0.8.1.\r\n\r\nI have added the test again. I removed it because i kept getting errors when calling `create_pull_request` with `repo_id=ds_name` where `temporary_repo = ds_name`, and thought i might look more thoroughly at it later. I have added a test called `test_test` showing this, it gives:\r\n```\r\ntests/test_upstream_hub.py:360: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n.venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn\r\n return fn(*args, **kwargs)\r\n.venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py:3451: in create_pull_request\r\n return self.create_discussion(\r\n.venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn\r\n return fn(*args, **kwargs)\r\n.venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py:3393: in create_discussion\r\n hf_raise_for_status(resp)\r\n(...)\r\nE huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-63ecd2cb-2cf2557a332c86ad27f687b3)\r\nE \r\nE Repository Not Found for url: https://huggingface.co/api/models/__DUMMY_TRANSFORMERS_USER__/test-16764648321590/discussions.\r\nE Please make sure you specified the correct `repo_id` and `repo_type`.\r\nE If you are trying to access a private or gated repo, make sure you are authenticated.\r\nE Invalid username or password.\r\n```", "> > Thanks. Some comments/suggestions below...\r\n> > Why have you removed the test for create_pr? You could add it again and just add a pytest skipif when version of huggingface_hub is lower than 0.8.1.\r\n> \r\n> I have added the test again. 
I removed it because i kept getting errors when calling `create_pull_request` with `repo_id=ds_name` where `temporary_repo = ds_name`, and thought i might look more thoroughly at it later. I have added a test called `test_test` showing this, it gives:\r\n> \r\n> ```\r\n> tests/test_upstream_hub.py:360: \r\n> _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n> .venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn\r\n> return fn(*args, **kwargs)\r\n> .venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py:3451: in create_pull_request\r\n> return self.create_discussion(\r\n> .venv/lib/python3.10/site-packages/huggingface_hub/utils/_validators.py:124: in _inner_fn\r\n> return fn(*args, **kwargs)\r\n> .venv/lib/python3.10/site-packages/huggingface_hub/hf_api.py:3393: in create_discussion\r\n> hf_raise_for_status(resp)\r\n> (...)\r\n> E huggingface_hub.utils._errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-63ecd2cb-2cf2557a332c86ad27f687b3)\r\n> E \r\n> E Repository Not Found for url: https://huggingface.co/api/models/__DUMMY_TRANSFORMERS_USER__/test-16764648321590/discussions.\r\n> E Please make sure you specified the correct `repo_id` and `repo_type`.\r\n> E If you are trying to access a private or gated repo, make sure you are authenticated.\r\n> E Invalid username or password.\r\n> ```\r\n\r\n@albertvillanova, @lhoestq , FYI I have looked at this again, and i haven't figured it out, so the test`test_push_dataset_to_hub_with_pull_request` and the minimal example `test_test` are still failing locally, while the other tests succeed. Do you have any advice?", "I tried to move all of the \"create pr safely\"-logic to a seperate function in `_hf_hub_fixes`. I looked at how the exceptions were raised before `huggingface_hub.utils.RepositoryNotFoundError`existed, and make changes accordingly. ", "`create_pr` was set during `push_to_hub`, even though it was `None` from the outset, hence causing tests to fail for older versions of `huggingface_hub`. This is now fixed.\r\n\r\nWith the implementation of `_hf_hub_fixes.upload_file` the function call expected `commit_message`, `commit_description`. If these are not set we call the function without them, even though we are on a version of `huggingface_hub` where they are not available in `upload_file`.\r\n\r\nWhen `huggingface_hub < 0.5.0` we assume `repo_id` of them form `organisation/name`, so now that we are calling `create_repo` in the tests with `repo_id` not of this form, we need to handle this case, this is now done.\r\n\r\nMany tests failed for `dataset_dict` for the above reasons, so the fixes from `arrow_dataset.py` were also added to `dataset_dict.py`. \r\n\r\n**All tests are now passing locally for `huggingface_hub==0.2.0` and `huggingface_hub==0.12.1`…** Im sorry I should have downgraded and went through this a long time ago, but I didn’t realise the extend of these version fixes until recently…", "Hi ! FYI bumped the `huggingface-hub` dependency to 0.11 and removed the `_hf_hub_fixes.py` - which should make this PR much easier", "Just now finding this - seems like a cool issue to contribute to. If any more help is needed please ping me! @AJDERS " ]
2023-02-13T11:43:47Z
2023-10-06T21:58:02Z
null
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5528.diff", "html_url": "https://github.com/huggingface/datasets/pull/5528", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/5528.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5528" }
Fixes #5492. Introduces a new kwarg `create_pr` in `push_to_hub`, which is passed to `HfApi.upload_file`.
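Since this PR was ultimately not merged (note the `"merged_at": null` in the record above), the call shape it proposes is best read as illustrative rather than released API; a hedged usage sketch:

```python
# Illustrative sketch of the kwarg this PR proposes; because the PR was not
# merged, do not rely on `create_pr` being available in a given release.
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"]})
# `create_pr=True` would be forwarded to `HfApi.upload_file`, so the upload
# opens a pull request on the Hub repo instead of committing to `main`.
ds.push_to_hub("username/my-dataset", create_pr=True)  # repo_id is a placeholder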
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5528/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5528/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3075/comments
https://api.github.com/repos/huggingface/datasets/issues/3075/events
https://github.com/huggingface/datasets/pull/3075
1,026,103,388
PR_kwDODunzps4tL75E
3,075
Updates LexGLUE and MultiEURLEX README.md files
{ "avatar_url": "https://avatars.githubusercontent.com/u/1626984?v=4", "events_url": "https://api.github.com/users/iliaschalkidis/events{/privacy}", "followers_url": "https://api.github.com/users/iliaschalkidis/followers", "following_url": "https://api.github.com/users/iliaschalkidis/following{/other_user}", "gists_url": "https://api.github.com/users/iliaschalkidis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/iliaschalkidis", "id": 1626984, "login": "iliaschalkidis", "node_id": "MDQ6VXNlcjE2MjY5ODQ=", "organizations_url": "https://api.github.com/users/iliaschalkidis/orgs", "received_events_url": "https://api.github.com/users/iliaschalkidis/received_events", "repos_url": "https://api.github.com/users/iliaschalkidis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/iliaschalkidis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/iliaschalkidis/subscriptions", "type": "User", "url": "https://api.github.com/users/iliaschalkidis" }
[]
closed
false
null
[]
null
[]
2021-10-14T08:19:16Z
2021-10-18T10:13:40Z
2021-10-18T10:13:40Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3075.diff", "html_url": "https://github.com/huggingface/datasets/pull/3075", "merged_at": "2021-10-18T10:13:40Z", "patch_url": "https://github.com/huggingface/datasets/pull/3075.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3075" }
Updates LexGLUE and MultiEURLEX README.md files:
- Fix leaderboard in LexGLUE.
- Fix an error in the CaseHOLD data example.
- Turn the MultiEURLEX dataset statistics table into HTML so it renders nicely on the HF website.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3075/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3075/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2846
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2846/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2846/comments
https://api.github.com/repos/huggingface/datasets/issues/2846/events
https://github.com/huggingface/datasets/issues/2846
981,587,590
MDU6SXNzdWU5ODE1ODc1OTA=
2,846
Negative timezone
{ "avatar_url": "https://avatars.githubusercontent.com/u/7156771?v=4", "events_url": "https://api.github.com/users/jadermcs/events{/privacy}", "followers_url": "https://api.github.com/users/jadermcs/followers", "following_url": "https://api.github.com/users/jadermcs/following{/other_user}", "gists_url": "https://api.github.com/users/jadermcs/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jadermcs", "id": 7156771, "login": "jadermcs", "node_id": "MDQ6VXNlcjcxNTY3NzE=", "organizations_url": "https://api.github.com/users/jadermcs/orgs", "received_events_url": "https://api.github.com/users/jadermcs/received_events", "repos_url": "https://api.github.com/users/jadermcs/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jadermcs/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jadermcs/subscriptions", "type": "User", "url": "https://api.github.com/users/jadermcs" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Fixed by #2847." ]
2021-08-27T20:50:33Z
2021-09-10T11:51:07Z
2021-09-10T11:51:07Z
CONTRIBUTOR
null
null
null
## Describe the bug
The `load_dataset` method does not accept a parquet file with a negative timezone, as it validates the timestamp dtype with the following regex:
```
"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$"
```
So a valid timestamp ```timestamp[us, tz=-03:00]``` raises an error when loading parquet files.

## Steps to reproduce the bug
```python
# Where the timestamp column has a tz of -03:00
datasets = load_dataset('parquet', data_files={'train': train_files, 'validation': validation_files, 'test': test_files}, cache_dir="./cache_teste/")
```

## Expected results
-03:00 is a valid tz, so the regex should accept it without raising an error.

## Actual results
As this regex rejects a valid tz, it raises the following error:
```python
raise ValueError(
    f"{datasets_dtype} is not a validly formatted string representation of a pyarrow timestamp."
    f"Examples include timestamp[us] or timestamp[us, tz=America/New_York]"
    f"See: https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp"
)
```

## Environment info
- `datasets` version: 1.11.0
- Platform: Ubuntu 20.04
- Python version: 3.8
- PyArrow version: 5.0.0
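The mismatch is easy to reproduce against the regex quoted in the report; the relaxed pattern at the end is one plausible fix, not necessarily the change that was merged in #2847:

```python
import re

# Pattern quoted in the report above (datasets 1.11.0), matched against the
# part inside "timestamp[...]":
old_pattern = r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$"
print(bool(re.match(old_pattern, "us, tz=America/New_York")))  # True
print(bool(re.match(old_pattern, "us, tz=+03:00")))            # True: '+' is in the class
print(bool(re.match(old_pattern, "us, tz=-03:00")))            # False: '-' is not

# One plausible fix (an assumption, not necessarily what #2847 merged):
# also allow '-' inside the tz character class.
fixed_pattern = r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+\-:]*)$"
print(bool(re.match(fixed_pattern, "us, tz=-03:00")))          # True
```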
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2846/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2846/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/6462
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6462/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6462/comments
https://api.github.com/repos/huggingface/datasets/issues/6462/events
https://github.com/huggingface/datasets/pull/6462
2,019,238,388
PR_kwDODunzps5gz68T
6,462
Missing DatasetNotFoundError
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005594 / 0.011353 (-0.005759) | 0.003672 / 0.011008 (-0.007337) | 0.062796 / 0.038508 (0.024288) | 0.059432 / 0.023109 (0.036323) | 0.253976 / 0.275898 (-0.021922) | 0.281155 / 0.323480 (-0.042325) | 0.003023 / 0.007986 (-0.004962) | 0.003320 / 0.004328 (-0.001008) | 0.049059 / 0.004250 (0.044809) | 0.040252 / 0.037052 (0.003200) | 0.259526 / 0.258489 (0.001037) | 0.318798 / 0.293841 (0.024957) | 0.027883 / 0.128546 (-0.100663) | 0.010883 / 0.075646 (-0.064763) | 0.206948 / 0.419271 (-0.212323) | 0.036335 / 0.043533 (-0.007198) | 0.253209 / 0.255139 (-0.001930) | 0.275173 / 0.283200 (-0.008026) | 0.020365 / 0.141683 (-0.121318) | 1.121630 / 1.452155 (-0.330524) | 1.174680 / 1.492716 (-0.318036) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.098372 / 0.018006 (0.080366) | 0.309949 / 0.000490 (0.309460) | 0.000225 / 0.000200 (0.000025) | 0.000043 / 0.000054 (-0.000011) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019495 / 0.037411 (-0.017916) | 0.062321 / 0.014526 (0.047795) | 0.074525 / 0.176557 (-0.102031) | 0.121832 / 0.737135 (-0.615303) | 0.077612 / 0.296338 (-0.218727) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.288156 / 0.215209 (0.072947) | 2.816411 / 2.077655 (0.738756) | 1.497926 / 1.504120 (-0.006193) | 1.378137 / 1.541195 (-0.163058) | 1.446466 / 
1.468490 (-0.022024) | 0.566195 / 4.584777 (-4.018582) | 2.391933 / 3.745712 (-1.353780) | 2.929290 / 5.269862 (-2.340572) | 1.828215 / 4.565676 (-2.737462) | 0.063312 / 0.424275 (-0.360963) | 0.005199 / 0.007607 (-0.002408) | 0.342883 / 0.226044 (0.116838) | 3.378388 / 2.268929 (1.109459) | 1.865710 / 55.444624 (-53.578915) | 1.573442 / 6.876477 (-5.303035) | 1.631228 / 2.142072 (-0.510845) | 0.651614 / 4.805227 (-4.153613) | 0.118177 / 6.500664 (-6.382487) | 0.043303 / 0.075469 (-0.032166) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.950694 / 1.841788 (-0.891094) | 12.559851 / 8.074308 (4.485543) | 10.751123 / 10.191392 (0.559731) | 0.143107 / 0.680424 (-0.537317) | 0.014469 / 0.534201 (-0.519732) | 0.289531 / 0.579283 (-0.289752) | 0.267316 / 0.434364 (-0.167047) | 0.327748 / 0.540337 (-0.212590) | 0.437758 / 1.386936 (-0.949178) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005669 / 0.011353 (-0.005684) | 0.003831 / 0.011008 (-0.007177) | 0.049096 / 0.038508 (0.010588) | 0.061408 / 0.023109 (0.038299) | 0.274571 / 0.275898 (-0.001327) | 0.299978 / 0.323480 (-0.023501) | 0.004216 / 0.007986 (-0.003769) | 0.002848 / 0.004328 (-0.001480) | 0.048755 / 0.004250 (0.044504) | 0.042576 / 0.037052 (0.005524) | 0.276781 / 0.258489 (0.018292) | 0.300903 / 0.293841 (0.007062) | 0.030243 / 0.128546 (-0.098303) | 0.010967 / 0.075646 (-0.064679) | 0.057879 / 0.419271 (-0.361392) | 0.033206 / 0.043533 (-0.010327) | 0.277620 / 0.255139 (0.022481) | 0.296263 / 0.283200 (0.013064) | 0.019022 / 0.141683 (-0.122660) | 1.125615 / 1.452155 (-0.326539) | 1.278016 / 1.492716 (-0.214700) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.096836 / 0.018006 (0.078830) | 0.307491 / 0.000490 (0.307001) | 0.000230 / 0.000200 (0.000030) | 0.000044 / 0.000054 (-0.000010) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | 
train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021552 / 0.037411 (-0.015859) | 0.071099 / 0.014526 (0.056573) | 0.082432 / 0.176557 (-0.094124) | 0.121826 / 0.737135 (-0.615310) | 0.084902 / 0.296338 (-0.211437) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.328113 / 0.215209 (0.112904) | 2.989613 / 2.077655 (0.911959) | 1.604904 / 1.504120 (0.100784) | 1.485459 / 1.541195 (-0.055735) | 1.524829 / 1.468490 (0.056339) | 0.580589 / 4.584777 (-4.004188) | 2.440087 / 3.745712 (-1.305625) | 2.944697 / 5.269862 (-2.325164) | 1.832728 / 4.565676 (-2.732949) | 0.064423 / 0.424275 (-0.359852) | 0.004991 / 0.007607 (-0.002616) | 0.357878 / 0.226044 (0.131834) | 3.515415 / 2.268929 (1.246487) | 1.964492 / 55.444624 (-53.480132) | 1.684058 / 6.876477 (-5.192418) | 1.730294 / 2.142072 (-0.411778) | 0.661228 / 4.805227 (-4.143999) | 0.122894 / 6.500664 (-6.377770) | 0.041776 / 0.075469 (-0.033693) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.969849 / 1.841788 (-0.871939) | 12.897067 / 8.074308 (4.822758) | 10.908200 / 10.191392 (0.716808) | 0.141139 / 0.680424 (-0.539285) | 0.015377 / 0.534201 (-0.518824) | 0.288625 / 0.579283 (-0.290658) | 0.279020 / 0.434364 (-0.155344) | 0.328386 / 0.540337 (-0.211951) | 0.590833 / 1.386936 (-0.796103) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#39ea60eaabb05d8ee38c072f375816cf87fce1a9 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.004986 / 0.011353 (-0.006367) | 0.003070 / 0.011008 (-0.007938) | 0.062433 / 0.038508 (0.023925) | 0.050639 / 0.023109 (0.027530) | 0.241807 / 0.275898 (-0.034091) | 0.262517 / 0.323480 (-0.060963) | 0.003826 / 0.007986 (-0.004160) | 0.002602 / 0.004328 (-0.001727) | 0.048508 / 0.004250 (0.044257) | 0.037276 / 0.037052 (0.000224) | 0.245757 / 0.258489 (-0.012732) | 0.272969 / 0.293841 (-0.020871) | 0.027139 / 0.128546 (-0.101407) | 0.010265 / 0.075646 (-0.065381) | 0.207279 / 0.419271 (-0.211992) | 0.035312 / 0.043533 (-0.008221) | 0.247535 / 0.255139 (-0.007604) | 0.260668 / 0.283200 (-0.022532) | 0.016496 / 0.141683 (-0.125187) | 1.137510 / 1.452155 (-0.314645) | 1.167870 / 1.492716 (-0.324847) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091743 / 0.018006 (0.073736) | 0.298649 / 0.000490 (0.298159) | 0.000208 / 0.000200 (0.000009) | 0.000053 / 0.000054 (-0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.019053 / 0.037411 (-0.018359) | 0.060300 / 0.014526 (0.045774) | 0.072154 / 0.176557 (-0.104402) | 0.120293 / 0.737135 (-0.616842) | 0.073923 / 0.296338 (-0.222415) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.283058 / 0.215209 (0.067849) | 2.769503 / 2.077655 (0.691849) | 1.457016 / 1.504120 (-0.047104) | 1.335753 / 1.541195 (-0.205441) | 1.325986 / 1.468490 (-0.142504) | 0.562553 / 4.584777 (-4.022224) | 2.406144 / 3.745712 (-1.339568) | 2.778063 / 5.269862 (-2.491799) | 1.782199 / 4.565676 (-2.783477) | 0.062490 / 0.424275 (-0.361785) | 0.004912 / 0.007607 (-0.002695) | 0.338500 / 0.226044 (0.112456) | 3.309746 / 2.268929 (1.040818) | 1.819693 / 55.444624 (-53.624931) | 1.510295 / 6.876477 (-5.366182) | 1.578402 / 2.142072 (-0.563671) | 0.637517 / 4.805227 (-4.167710) | 0.117018 / 6.500664 (-6.383647) | 0.048149 / 0.075469 (-0.027320) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.939424 / 1.841788 (-0.902364) | 11.494891 / 8.074308 (3.420583) | 10.115194 / 10.191392 (-0.076198) | 0.126751 / 0.680424 (-0.553673) | 0.013567 / 0.534201 (-0.520634) | 0.282501 / 0.579283 (-0.296782) | 0.260594 / 0.434364 (-0.173770) | 0.325940 / 0.540337 (-0.214397) | 0.426186 / 1.386936 (-0.960750) 
|\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005405 / 0.011353 (-0.005948) | 0.003557 / 0.011008 (-0.007451) | 0.051139 / 0.038508 (0.012631) | 0.053446 / 0.023109 (0.030337) | 0.268051 / 0.275898 (-0.007847) | 0.292343 / 0.323480 (-0.031136) | 0.004716 / 0.007986 (-0.003269) | 0.002677 / 0.004328 (-0.001651) | 0.047634 / 0.004250 (0.043384) | 0.041062 / 0.037052 (0.004009) | 0.269225 / 0.258489 (0.010736) | 0.297462 / 0.293841 (0.003621) | 0.029292 / 0.128546 (-0.099254) | 0.010947 / 0.075646 (-0.064699) | 0.057845 / 0.419271 (-0.361426) | 0.032793 / 0.043533 (-0.010740) | 0.265308 / 0.255139 (0.010169) | 0.288242 / 0.283200 (0.005043) | 0.018311 / 0.141683 (-0.123372) | 1.140957 / 1.452155 (-0.311197) | 1.204883 / 1.492716 (-0.287833) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.091375 / 0.018006 (0.073368) | 0.285922 / 0.000490 (0.285432) | 0.000238 / 0.000200 (0.000038) | 0.000051 / 0.000054 (-0.000004) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.021277 / 0.037411 (-0.016134) | 0.068853 / 0.014526 (0.054328) | 0.081002 / 0.176557 (-0.095555) | 0.120998 / 0.737135 (-0.616138) | 0.082741 / 0.296338 (-0.213598) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.299398 / 0.215209 (0.084189) | 2.909622 / 2.077655 (0.831967) | 1.624381 / 1.504120 (0.120261) | 1.501683 / 1.541195 (-0.039512) | 1.523045 / 1.468490 (0.054555) | 0.548960 / 
4.584777 (-4.035817) | 2.413297 / 3.745712 (-1.332415) | 2.817852 / 5.269862 (-2.452010) | 1.754407 / 4.565676 (-2.811270) | 0.061912 / 0.424275 (-0.362363) | 0.004880 / 0.007607 (-0.002727) | 0.353989 / 0.226044 (0.127944) | 3.496147 / 2.268929 (1.227219) | 2.003026 / 55.444624 (-53.441598) | 1.702013 / 6.876477 (-5.174463) | 1.680935 / 2.142072 (-0.461137) | 0.630183 / 4.805227 (-4.175044) | 0.113786 / 6.500664 (-6.386878) | 0.040061 / 0.075469 (-0.035408) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.957218 / 1.841788 (-0.884569) | 11.914469 / 8.074308 (3.840160) | 10.488896 / 10.191392 (0.297504) | 0.129292 / 0.680424 (-0.551132) | 0.016603 / 0.534201 (-0.517598) | 0.287367 / 0.579283 (-0.291916) | 0.271332 / 0.434364 (-0.163032) | 0.325577 / 0.540337 (-0.214761) | 0.560553 / 1.386936 (-0.826383) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#2d31e434bbeafdf6a70cb80539342d8fe5f5fd27 \"CML watermark\")\n" ]
2023-11-30T18:09:43Z
2023-11-30T18:36:40Z
2023-11-30T18:30:30Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6462.diff", "html_url": "https://github.com/huggingface/datasets/pull/6462", "merged_at": "2023-11-30T18:30:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/6462.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6462" }
Continuation of https://github.com/huggingface/datasets/pull/6431. This should fix the CI in https://github.com/huggingface/datasets/pull/6458 too.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6462/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6462/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5387
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5387/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5387/comments
https://api.github.com/repos/huggingface/datasets/issues/5387/events
https://github.com/huggingface/datasets/issues/5387
1,508,740,177
I_kwDODunzps5Z7YxR
5,387
Missing documentation page : improve-performance
{ "avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4", "events_url": "https://api.github.com/users/astariul/events{/privacy}", "followers_url": "https://api.github.com/users/astariul/followers", "following_url": "https://api.github.com/users/astariul/following{/other_user}", "gists_url": "https://api.github.com/users/astariul/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/astariul", "id": 43774355, "login": "astariul", "node_id": "MDQ6VXNlcjQzNzc0MzU1", "organizations_url": "https://api.github.com/users/astariul/orgs", "received_events_url": "https://api.github.com/users/astariul/received_events", "repos_url": "https://api.github.com/users/astariul/repos", "site_admin": false, "starred_url": "https://api.github.com/users/astariul/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/astariul/subscriptions", "type": "User", "url": "https://api.github.com/users/astariul" }
[]
closed
false
null
[]
null
[ "Hi! Our documentation builder does not support links to sections, hence the bug. This is the link it should point to https://huggingface.co/docs/datasets/v2.8.0/en/cache#improve-performance." ]
2022-12-23T01:12:57Z
2023-01-24T16:33:40Z
2023-01-24T16:33:40Z
NONE
null
null
null
### Describe the bug
Trying to access https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/cache#improve-performance, the page is missing.

The link is here: https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/loading_methods#datasets.load_dataset.keep_in_memory

### Steps to reproduce the bug
Access the page and see that it's missing.

### Expected behavior
The page should not be missing.

### Environment info
Doesn't matter
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5387/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5387/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2518
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2518/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2518/comments
https://api.github.com/repos/huggingface/datasets/issues/2518/events
https://github.com/huggingface/datasets/pull/2518
924,654,100
MDExOlB1bGxSZXF1ZXN0NjczMjU5Nzg1
2,518
Add task templates for tydiqa and xquad
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[]
closed
false
null
[]
null
[ "Just tested TydiQA and it works fine :)" ]
2021-06-18T08:06:34Z
2021-06-18T15:01:17Z
2021-06-18T14:50:33Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2518.diff", "html_url": "https://github.com/huggingface/datasets/pull/2518", "merged_at": "2021-06-18T14:50:33Z", "patch_url": "https://github.com/huggingface/datasets/pull/2518.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2518" }
This PR adds question-answering templates to the remaining datasets that are linked to a model on the Hub. Notes:
* I could not test the tydiqa implementation since I don't have enough disk space 😢. But I am confident the template works :)
* There exist other datasets like `fquad` and `mlqa` which are candidates for question-answering templates, but some work is needed to handle the ordering of nested columns described in #2434.
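For reference, a question-answering task template of that era was constructed roughly like this; `datasets.tasks` has since been deprecated, and the SQuAD-style column names here are illustrative defaults:

```python
# Rough sketch of the 2021-era task-template API this PR uses. The
# `datasets.tasks` module was deprecated in later releases; the column names
# are the SQuAD-style defaults and may differ per dataset.
from datasets.tasks import QuestionAnsweringExtractive

template = QuestionAnsweringExtractive(
    question_column="question",
    context_column="context",
    answers_column="answers",
)
# In a dataset script, this would typically be attached via
# `datasets.DatasetInfo(task_templates=[template], ...)`.
```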
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2518/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2518/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3976
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3976/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3976/comments
https://api.github.com/repos/huggingface/datasets/issues/3976/events
https://github.com/huggingface/datasets/pull/3976
1,175,043,780
PR_kwDODunzps40uOY6
3,976
Fix main classes reference in docs
{ "avatar_url": "https://avatars.githubusercontent.com/u/24835382?v=4", "events_url": "https://api.github.com/users/qqaatw/events{/privacy}", "followers_url": "https://api.github.com/users/qqaatw/followers", "following_url": "https://api.github.com/users/qqaatw/following{/other_user}", "gists_url": "https://api.github.com/users/qqaatw/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/qqaatw", "id": 24835382, "login": "qqaatw", "node_id": "MDQ6VXNlcjI0ODM1Mzgy", "organizations_url": "https://api.github.com/users/qqaatw/orgs", "received_events_url": "https://api.github.com/users/qqaatw/received_events", "repos_url": "https://api.github.com/users/qqaatw/repos", "site_admin": false, "starred_url": "https://api.github.com/users/qqaatw/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/qqaatw/subscriptions", "type": "User", "url": "https://api.github.com/users/qqaatw" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3976). All of your documentation changes will be reflected on that endpoint.", "Not sure why some section titles end with `[[datasets.xxx]]`, like this: https://moon-ci-docs.huggingface.co/docs/datasets/pr_3976/en/package_reference/main_classes#datasetdict[[datasets.datasetdict]]", "Thanks ! I think this has been fixed already in https://github.com/huggingface/datasets/pull/3925 though\r\n\r\nI'm closing this one then if it's fine for you" ]
2022-03-21T08:19:46Z
2022-04-12T14:19:39Z
2022-04-12T14:19:38Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3976.diff", "html_url": "https://github.com/huggingface/datasets/pull/3976", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/3976.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3976" }
Currently the section index (on the page's right side) of the [main classes reference](https://huggingface.co/docs/datasets/master/en/package_reference/main_classes) incorrectly displays `Tensor returned:`. This PR fixes the issue by wrapping the code examples on this page in markdown code blocks. Other examples in the datasets library have the same issue.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3976/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3976/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/330
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/330/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/330/comments
https://api.github.com/repos/huggingface/datasets/issues/330/events
https://github.com/huggingface/datasets/pull/330
648,525,720
MDExOlB1bGxSZXF1ZXN0NDQyMzIxMjEw
330
Doc red
{ "avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4", "events_url": "https://api.github.com/users/ghomasHudson/events{/privacy}", "followers_url": "https://api.github.com/users/ghomasHudson/followers", "following_url": "https://api.github.com/users/ghomasHudson/following{/other_user}", "gists_url": "https://api.github.com/users/ghomasHudson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ghomasHudson", "id": 13795113, "login": "ghomasHudson", "node_id": "MDQ6VXNlcjEzNzk1MTEz", "organizations_url": "https://api.github.com/users/ghomasHudson/orgs", "received_events_url": "https://api.github.com/users/ghomasHudson/received_events", "repos_url": "https://api.github.com/users/ghomasHudson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ghomasHudson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ghomasHudson/subscriptions", "type": "User", "url": "https://api.github.com/users/ghomasHudson" }
[]
closed
false
null
[]
null
[]
2020-06-30T22:05:31Z
2020-07-06T12:10:39Z
2020-07-05T12:27:29Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/330.diff", "html_url": "https://github.com/huggingface/datasets/pull/330", "merged_at": "2020-07-05T12:27:29Z", "patch_url": "https://github.com/huggingface/datasets/pull/330.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/330" }
Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE.

A few implementation notes:
- There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to reflect this.
- As well as the relation id, the full relation name is mapped from `rel_info.json`.
- I renamed the 'h', 'r', 't' keys to 'head', 'relation' and 'tail' to make them more readable.
- Used the fix from #319 to allow nested sequences of dicts.
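Given the custom split names described above, loading would look roughly like this (a sketch; the library was still called `nlp` when this PR was merged, so the `datasets` import reflects later usage):

```python
# Sketch of loading the two training variants described above. The split
# names come from this PR; everything else is illustrative.
from datasets import load_dataset

train_annotated = load_dataset("docred", split="train_annotated")
train_distant = load_dataset("docred", split="train_distant")

example = train_annotated[0]
# Relation annotations use the renamed keys "head", "relation" and "tail".
```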
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/330/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/330/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5958
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5958/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5958/comments
https://api.github.com/repos/huggingface/datasets/issues/5958/events
https://github.com/huggingface/datasets/pull/5958
1,757,265,971
PR_kwDODunzps5TA3__
5,958
set dev version
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5958). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006232 / 0.011353 (-0.005121) | 0.003788 / 0.011008 (-0.007220) | 0.100014 / 0.038508 (0.061506) | 0.036488 / 0.023109 (0.013379) | 0.306255 / 0.275898 (0.030357) | 0.363337 / 0.323480 (0.039857) | 0.004765 / 0.007986 (-0.003221) | 0.002935 / 0.004328 (-0.001394) | 0.078897 / 0.004250 (0.074647) | 0.052221 / 0.037052 (0.015169) | 0.315169 / 0.258489 (0.056680) | 0.353050 / 0.293841 (0.059209) | 0.029059 / 0.128546 (-0.099488) | 0.008599 / 0.075646 (-0.067047) | 0.318770 / 0.419271 (-0.100502) | 0.046631 / 0.043533 (0.003098) | 0.303728 / 0.255139 (0.048589) | 0.332379 / 0.283200 (0.049180) | 0.021164 / 0.141683 (-0.120519) | 1.576963 / 1.452155 (0.124808) | 1.629575 / 1.492716 (0.136859) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.204246 / 0.018006 (0.186240) | 0.426600 / 0.000490 (0.426110) | 0.004336 / 0.000200 (0.004136) | 0.000082 / 0.000054 (0.000027) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.024039 / 0.037411 (-0.013372) | 0.098240 / 0.014526 (0.083715) | 0.108889 / 0.176557 (-0.067668) | 0.170827 / 0.737135 (-0.566308) | 0.111288 / 0.296338 (-0.185051) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / 
old (diff) | 0.418103 / 0.215209 (0.202894) | 4.190759 / 2.077655 (2.113104) | 1.875978 / 1.504120 (0.371858) | 1.679198 / 1.541195 (0.138003) | 1.737965 / 1.468490 (0.269474) | 0.556660 / 4.584777 (-4.028117) | 3.413800 / 3.745712 (-0.331912) | 3.004999 / 5.269862 (-2.264862) | 1.464030 / 4.565676 (-3.101647) | 0.067338 / 0.424275 (-0.356937) | 0.011486 / 0.007607 (0.003879) | 0.522589 / 0.226044 (0.296544) | 5.214653 / 2.268929 (2.945724) | 2.316903 / 55.444624 (-53.127722) | 1.991941 / 6.876477 (-4.884536) | 2.110601 / 2.142072 (-0.031471) | 0.665400 / 4.805227 (-4.139828) | 0.135755 / 6.500664 (-6.364910) | 0.065980 / 0.075469 (-0.009489) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.197269 / 1.841788 (-0.644519) | 14.085205 / 8.074308 (6.010897) | 14.083360 / 10.191392 (3.891968) | 0.148054 / 0.680424 (-0.532369) | 0.016548 / 0.534201 (-0.517653) | 0.371538 / 0.579283 (-0.207745) | 0.391068 / 0.434364 (-0.043296) | 0.430589 / 0.540337 (-0.109748) | 0.529319 / 1.386936 (-0.857617) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006214 / 0.011353 (-0.005138) | 0.003846 / 0.011008 (-0.007162) | 0.078559 / 0.038508 (0.040051) | 0.037855 / 0.023109 (0.014745) | 0.437479 / 0.275898 (0.161581) | 0.497588 / 0.323480 (0.174108) | 0.003491 / 0.007986 (-0.004494) | 0.003900 / 0.004328 (-0.000428) | 0.078443 / 0.004250 (0.074193) | 0.048019 / 0.037052 (0.010967) | 0.452076 / 0.258489 (0.193587) | 0.494597 / 0.293841 (0.200756) | 0.028127 / 0.128546 (-0.100419) | 0.008549 / 0.075646 (-0.067098) | 0.082977 / 0.419271 (-0.336295) | 0.043133 / 0.043533 (-0.000400) | 0.441342 / 0.255139 (0.186203) | 0.464339 / 0.283200 (0.181139) | 0.020110 / 0.141683 (-0.121573) | 1.485181 / 1.452155 (0.033026) | 1.532019 / 1.492716 (0.039302) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.228014 / 0.018006 (0.210007) | 0.416887 / 0.000490 (0.416397) | 0.001133 / 0.000200 
(0.000933) | 0.000108 / 0.000054 (0.000053) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026452 / 0.037411 (-0.010960) | 0.104328 / 0.014526 (0.089802) | 0.110045 / 0.176557 (-0.066511) | 0.164725 / 0.737135 (-0.572410) | 0.116348 / 0.296338 (-0.179990) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.483502 / 0.215209 (0.268293) | 4.829814 / 2.077655 (2.752159) | 2.505271 / 1.504120 (1.001151) | 2.305819 / 1.541195 (0.764624) | 2.348633 / 1.468490 (0.880143) | 0.562316 / 4.584777 (-4.022461) | 3.426425 / 3.745712 (-0.319287) | 1.737934 / 5.269862 (-3.531927) | 1.042616 / 4.565676 (-3.523061) | 0.068088 / 0.424275 (-0.356187) | 0.011735 / 0.007607 (0.004128) | 0.586339 / 0.226044 (0.360295) | 5.861283 / 2.268929 (3.592354) | 2.953956 / 55.444624 (-52.490668) | 2.626611 / 6.876477 (-4.249865) | 2.687978 / 2.142072 (0.545906) | 0.672748 / 4.805227 (-4.132479) | 0.137231 / 6.500664 (-6.363433) | 0.068149 / 0.075469 (-0.007320) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.323139 / 1.841788 (-0.518649) | 14.503102 / 8.074308 (6.428794) | 14.092102 / 10.191392 (3.900710) | 0.165395 / 0.680424 (-0.515028) | 0.016898 / 0.534201 (-0.517303) | 0.366905 / 0.579283 (-0.212378) | 0.396671 / 0.434364 (-0.037692) | 0.421831 / 0.540337 (-0.118506) | 0.514075 / 1.386936 (-0.872861) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#9d4238c132dd44b9a6e1dfe7101228bdeb538d57 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after 
write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007778 / 0.011353 (-0.003575) | 0.004624 / 0.011008 (-0.006384) | 0.123426 / 0.038508 (0.084918) | 0.052209 / 0.023109 (0.029100) | 0.341084 / 0.275898 (0.065186) | 0.421905 / 0.323480 (0.098425) | 0.005768 / 0.007986 (-0.002217) | 0.003647 / 0.004328 (-0.000682) | 0.085569 / 0.004250 (0.081319) | 0.070473 / 0.037052 (0.033421) | 0.356626 / 0.258489 (0.098136) | 0.407413 / 0.293841 (0.113572) | 0.038800 / 0.128546 (-0.089746) | 0.010289 / 0.075646 (-0.065357) | 0.462707 / 0.419271 (0.043436) | 0.060390 / 0.043533 (0.016858) | 0.349805 / 0.255139 (0.094666) | 0.355288 / 0.283200 (0.072088) | 0.025364 / 0.141683 (-0.116318) | 1.745720 / 1.452155 (0.293565) | 1.852764 / 1.492716 (0.360048) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.290582 / 0.018006 (0.272576) | 0.480044 / 0.000490 (0.479554) | 0.007658 / 0.000200 (0.007458) | 0.000100 / 0.000054 (0.000046) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031529 / 0.037411 (-0.005882) | 0.130441 / 0.014526 (0.115915) | 0.147653 / 0.176557 (-0.028904) | 0.215935 / 0.737135 (-0.521200) | 0.149871 / 0.296338 (-0.146467) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.461662 / 0.215209 (0.246453) | 4.570353 / 2.077655 (2.492698) | 2.104416 / 1.504120 (0.600297) | 1.936974 / 1.541195 (0.395779) | 2.139167 / 1.468490 (0.670677) | 0.645100 / 4.584777 (-3.939677) | 4.361536 / 3.745712 (0.615824) | 2.155960 / 5.269862 (-3.113902) | 1.207854 / 4.565676 (-3.357822) | 0.080162 / 0.424275 (-0.344113) | 0.014265 / 0.007607 (0.006658) | 0.606294 / 0.226044 (0.380250) | 5.928093 / 2.268929 (3.659165) | 2.701811 / 55.444624 (-52.742813) | 2.344490 / 6.876477 (-4.531987) | 2.435997 / 2.142072 (0.293925) | 0.761020 / 4.805227 (-4.044207) | 0.165860 / 6.500664 (-6.334804) | 0.075666 / 0.075469 (0.000197) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.427318 / 1.841788 (-0.414469) | 17.327468 / 8.074308 (9.253160) | 15.323065 / 10.191392 (5.131673) | 0.178518 / 0.680424 (-0.501905) | 0.020888 / 0.534201 (-0.513313) | 0.497891 / 0.579283 (-0.081393) | 0.487717 / 
0.434364 (0.053353) | 0.581430 / 0.540337 (0.041093) | 0.703430 / 1.386936 (-0.683506) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007954 / 0.011353 (-0.003399) | 0.004442 / 0.011008 (-0.006566) | 0.090950 / 0.038508 (0.052442) | 0.054282 / 0.023109 (0.031173) | 0.424474 / 0.275898 (0.148576) | 0.531770 / 0.323480 (0.208290) | 0.004492 / 0.007986 (-0.003493) | 0.004745 / 0.004328 (0.000416) | 0.088213 / 0.004250 (0.083962) | 0.063967 / 0.037052 (0.026914) | 0.454256 / 0.258489 (0.195767) | 0.502870 / 0.293841 (0.209029) | 0.038203 / 0.128546 (-0.090343) | 0.010327 / 0.075646 (-0.065319) | 0.097809 / 0.419271 (-0.321463) | 0.062136 / 0.043533 (0.018604) | 0.426148 / 0.255139 (0.171009) | 0.467812 / 0.283200 (0.184612) | 0.029148 / 0.141683 (-0.112535) | 1.762307 / 1.452155 (0.310152) | 1.814238 / 1.492716 (0.321521) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.195676 / 0.018006 (0.177670) | 0.475382 / 0.000490 (0.474892) | 0.003070 / 0.000200 (0.002870) | 0.000112 / 0.000054 (0.000057) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.033945 / 0.037411 (-0.003466) | 0.134666 / 0.014526 (0.120140) | 0.147585 / 0.176557 (-0.028971) | 0.209472 / 0.737135 (-0.527664) | 0.154471 / 0.296338 (-0.141867) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.518132 / 0.215209 (0.302923) | 5.103423 / 2.077655 (3.025768) | 2.565207 / 1.504120 (1.061087) | 
2.389454 / 1.541195 (0.848259) | 2.391706 / 1.468490 (0.923216) | 0.606463 / 4.584777 (-3.978314) | 4.392227 / 3.745712 (0.646515) | 2.067121 / 5.269862 (-3.202741) | 1.217551 / 4.565676 (-3.348125) | 0.074304 / 0.424275 (-0.349971) | 0.013418 / 0.007607 (0.005811) | 0.623327 / 0.226044 (0.397282) | 6.340233 / 2.268929 (4.071304) | 3.153948 / 55.444624 (-52.290677) | 2.824548 / 6.876477 (-4.051929) | 2.938402 / 2.142072 (0.796329) | 0.774305 / 4.805227 (-4.030922) | 0.170681 / 6.500664 (-6.329983) | 0.075895 / 0.075469 (0.000426) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.473491 / 1.841788 (-0.368296) | 17.372294 / 8.074308 (9.297986) | 15.550201 / 10.191392 (5.358809) | 0.191402 / 0.680424 (-0.489022) | 0.021401 / 0.534201 (-0.512800) | 0.484377 / 0.579283 (-0.094906) | 0.488844 / 0.434364 (0.054480) | 0.563336 / 0.540337 (0.022999) | 0.694210 / 1.386936 (-0.692726) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b96da7f51d81e52d7b587685f820b5e55f71e07d \"CML watermark\")\n" ]
2023-06-14T16:26:34Z
2023-06-14T16:34:55Z
2023-06-14T16:26:51Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5958.diff", "html_url": "https://github.com/huggingface/datasets/pull/5958", "merged_at": "2023-06-14T16:26:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/5958.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5958" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5958/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5958/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2639/comments
https://api.github.com/repos/huggingface/datasets/issues/2639/events
https://github.com/huggingface/datasets/pull/2639
943,527,463
MDExOlB1bGxSZXF1ZXN0Njg5MTQ3NDE5
2,639
Refactor patching to specific submodule
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-07-13T15:08:45Z
2021-07-13T16:52:49Z
2021-07-13T16:52:49Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2639.diff", "html_url": "https://github.com/huggingface/datasets/pull/2639", "merged_at": "2021-07-13T16:52:48Z", "patch_url": "https://github.com/huggingface/datasets/pull/2639.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2639" }
Minor reorganization of the code, so that additional patching functions (not related to streaming) can be created. In relation to the initial approach followed in #2631.
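To make the refactor concrete, here is a minimal, hypothetical sketch of what a submodule-patching helper can look like. The helper name, signature, and usage are assumptions for illustration only; this is not the library's actual API.

```python
# Hypothetical submodule-patching helper (illustrative only, not the
# actual datasets API): temporarily replace an attribute reachable from
# a module, then restore the original on exit.
from contextlib import contextmanager

@contextmanager
def patch_submodule(module, attr_path, replacement):
    *parents, name = attr_path.split(".")
    target = module
    for parent in parents:
        target = getattr(target, parent)  # walk down, e.g. os -> os.path
    original = getattr(target, name)
    setattr(target, name, replacement)
    try:
        yield
    finally:
        setattr(target, name, original)  # always restore the original

# Usage example: swap os.path.join for the duration of a block.
import os

with patch_submodule(os, "path.join", lambda *parts: "/".join(parts)):
    assert os.path.join("a", "b") == "a/b"
```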
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2639/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2639/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4777
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4777/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4777/comments
https://api.github.com/repos/huggingface/datasets/issues/4777/events
https://github.com/huggingface/datasets/pull/4777
1,324,548,784
PR_kwDODunzps48cByL
4,777
Require torchaudio<0.12.0 to avoid RuntimeError
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-08-01T14:50:50Z
2022-08-02T17:35:14Z
2022-08-02T17:21:39Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4777.diff", "html_url": "https://github.com/huggingface/datasets/pull/4777", "merged_at": "2022-08-02T17:21:39Z", "patch_url": "https://github.com/huggingface/datasets/pull/4777.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4777" }
Related to: - https://github.com/huggingface/transformers/issues/18379 Partially fixes #4776.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4777/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4777/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/6080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6080/comments
https://api.github.com/repos/huggingface/datasets/issues/6080/events
https://github.com/huggingface/datasets/pull/6080
1,822,667,554
PR_kwDODunzps5WdL4K
6,080
Remove README link to deprecated Colab notebook
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006458 / 0.011353 (-0.004894) | 0.003895 / 0.011008 (-0.007114) | 0.084280 / 0.038508 (0.045772) | 0.071304 / 0.023109 (0.048195) | 0.313910 / 0.275898 (0.038012) | 0.344070 / 0.323480 (0.020590) | 0.005413 / 0.007986 (-0.002573) | 0.003308 / 0.004328 (-0.001021) | 0.064570 / 0.004250 (0.060320) | 0.056824 / 0.037052 (0.019771) | 0.321102 / 0.258489 (0.062613) | 0.355834 / 0.293841 (0.061993) | 0.031252 / 0.128546 (-0.097294) | 0.008427 / 0.075646 (-0.067219) | 0.287348 / 0.419271 (-0.131924) | 0.053261 / 0.043533 (0.009728) | 0.324892 / 0.255139 (0.069753) | 0.335847 / 0.283200 (0.052647) | 0.023453 / 0.141683 (-0.118230) | 1.485456 / 1.452155 (0.033301) | 1.531329 / 1.492716 (0.038612) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.201924 / 0.018006 (0.183918) | 0.447188 / 0.000490 (0.446698) | 0.005543 / 0.000200 (0.005343) | 0.000086 / 0.000054 (0.000031) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.027586 / 0.037411 (-0.009825) | 0.082412 / 0.014526 (0.067886) | 0.094851 / 0.176557 (-0.081706) | 0.151331 / 0.737135 (-0.585804) | 0.094475 / 0.296338 (-0.201863) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.399004 / 0.215209 (0.183795) | 3.974652 / 2.077655 (1.896997) | 
1.991909 / 1.504120 (0.487789) | 1.811684 / 1.541195 (0.270489) | 1.869774 / 1.468490 (0.401283) | 0.487745 / 4.584777 (-4.097032) | 3.558945 / 3.745712 (-0.186768) | 5.530468 / 5.269862 (0.260606) | 3.293147 / 4.565676 (-1.272529) | 0.057531 / 0.424275 (-0.366744) | 0.007212 / 0.007607 (-0.000395) | 0.470325 / 0.226044 (0.244281) | 4.701652 / 2.268929 (2.432723) | 2.453020 / 55.444624 (-52.991605) | 2.110152 / 6.876477 (-4.766325) | 2.314669 / 2.142072 (0.172597) | 0.615039 / 4.805227 (-4.190189) | 0.133229 / 6.500664 (-6.367435) | 0.060821 / 0.075469 (-0.014648) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296708 / 1.841788 (-0.545079) | 18.717251 / 8.074308 (10.642943) | 14.325305 / 10.191392 (4.133913) | 0.147680 / 0.680424 (-0.532744) | 0.018312 / 0.534201 (-0.515889) | 0.392766 / 0.579283 (-0.186517) | 0.403319 / 0.434364 (-0.031045) | 0.453696 / 0.540337 (-0.086641) | 0.622564 / 1.386936 (-0.764372) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006483 / 0.011353 (-0.004870) | 0.004018 / 0.011008 (-0.006991) | 0.064436 / 0.038508 (0.025928) | 0.072365 / 0.023109 (0.049256) | 0.387532 / 0.275898 (0.111634) | 0.418175 / 0.323480 (0.094695) | 0.005453 / 0.007986 (-0.002533) | 0.003368 / 0.004328 (-0.000961) | 0.064896 / 0.004250 (0.060645) | 0.057018 / 0.037052 (0.019966) | 0.406596 / 0.258489 (0.148107) | 0.431194 / 0.293841 (0.137353) | 0.031788 / 0.128546 (-0.096759) | 0.008532 / 0.075646 (-0.067114) | 0.070605 / 0.419271 (-0.348666) | 0.053317 / 0.043533 (0.009785) | 0.391930 / 0.255139 (0.136791) | 0.406071 / 0.283200 (0.122872) | 0.028652 / 0.141683 (-0.113030) | 1.487677 / 1.452155 (0.035522) | 1.546071 / 1.492716 (0.053355) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.220063 / 0.018006 (0.202056) | 0.441111 / 0.000490 (0.440621) | 0.006066 / 0.000200 (0.005867) | 0.000084 / 0.000054 (0.000030) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035179 / 0.037411 (-0.002232) | 0.096745 / 0.014526 (0.082219) | 0.108171 / 0.176557 (-0.068386) | 0.164590 / 0.737135 (-0.572545) | 0.109425 / 0.296338 (-0.186913) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.408101 / 0.215209 (0.192892) | 4.062961 / 2.077655 (1.985306) | 2.101849 / 1.504120 (0.597730) | 1.935919 / 1.541195 (0.394724) | 1.993749 / 1.468490 (0.525259) | 0.487788 / 4.584777 (-4.096989) | 3.533972 / 3.745712 (-0.211740) | 3.218448 / 5.269862 (-2.051414) | 2.002322 / 4.565676 (-2.563355) | 0.057371 / 0.424275 (-0.366904) | 0.007704 / 0.007607 (0.000097) | 0.491695 / 0.226044 (0.265650) | 4.905009 / 2.268929 (2.636080) | 2.597879 / 55.444624 (-52.846745) | 2.252086 / 6.876477 (-4.624391) | 2.434439 / 2.142072 (0.292367) | 0.583071 / 4.805227 (-4.222156) | 0.133765 / 6.500664 (-6.366899) | 0.061276 / 0.075469 (-0.014193) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.403111 / 1.841788 (-0.438676) | 19.218886 / 8.074308 (11.144578) | 13.981775 / 10.191392 (3.790383) | 0.167784 / 0.680424 (-0.512640) | 0.018401 / 0.534201 (-0.515800) | 0.392038 / 0.579283 (-0.187245) | 0.414776 / 0.434364 (-0.019587) | 0.476221 / 0.540337 (-0.064117) | 0.632724 / 1.386936 (-0.754212) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#420dbd92c42840d6c91ecf5d3560c6799ee0cca1 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007595 / 0.011353 (-0.003758) | 0.004540 / 0.011008 (-0.006468) | 0.099350 / 0.038508 (0.060842) | 0.087062 / 0.023109 (0.063953) | 0.415980 / 0.275898 (0.140082) | 0.466390 / 0.323480 (0.142910) | 0.005958 / 0.007986 (-0.002027) | 0.003671 / 0.004328 (-0.000657) | 0.075714 / 0.004250 (0.071463) | 0.066062 / 0.037052 (0.029010) | 0.426527 / 0.258489 (0.168038) | 0.473282 / 0.293841 (0.179441) | 0.035669 / 0.128546 (-0.092878) | 0.009729 / 0.075646 (-0.065918) | 0.344035 / 0.419271 (-0.075237) | 0.061153 / 0.043533 (0.017620) | 0.428607 / 0.255139 (0.173468) | 0.445951 / 0.283200 (0.162752) | 0.026373 / 0.141683 (-0.115310) | 1.788725 / 1.452155 (0.336570) | 1.871055 / 1.492716 (0.378339) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.230606 / 0.018006 (0.212600) | 0.489835 / 0.000490 (0.489345) | 0.005669 / 0.000200 (0.005469) | 0.000100 / 0.000054 (0.000045) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.032197 / 0.037411 (-0.005214) | 0.099571 / 0.014526 (0.085045) | 0.112686 / 0.176557 (-0.063871) | 0.179478 / 0.737135 (-0.557658) | 0.112670 / 0.296338 (-0.183668) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.449606 / 0.215209 (0.234397) | 4.503356 / 2.077655 (2.425701) | 2.190480 / 1.504120 (0.686361) | 1.986054 / 1.541195 (0.444860) | 2.071594 / 1.468490 (0.603104) | 0.566301 / 4.584777 (-4.018475) | 4.088460 / 3.745712 (0.342748) | 4.840100 / 5.269862 (-0.429761) | 2.857697 / 4.565676 (-1.707980) | 0.066718 / 0.424275 (-0.357557) | 0.008642 / 0.007607 (0.001034) | 0.539785 / 0.226044 (0.313740) | 5.383252 / 2.268929 (3.114323) | 2.878177 / 55.444624 (-52.566447) | 2.374577 / 6.876477 (-4.501899) | 2.590500 / 2.142072 (0.448428) | 0.675196 / 4.805227 (-4.130031) | 0.153544 / 6.500664 (-6.347120) | 0.070958 / 0.075469 (-0.004511) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.490403 / 1.841788 (-0.351385) | 22.085740 / 8.074308 (14.011432) | 16.588093 / 10.191392 (6.396701) | 0.188598 / 0.680424 (-0.491826) | 0.021567 / 0.534201 (-0.512634) | 0.472594 / 0.579283 (-0.106689) | 0.472903 / 0.434364 (0.038539) | 0.545305 / 0.540337 
(0.004968) | 0.736399 / 1.386936 (-0.650537) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.007635 / 0.011353 (-0.003718) | 0.004731 / 0.011008 (-0.006277) | 0.076482 / 0.038508 (0.037974) | 0.083666 / 0.023109 (0.060557) | 0.469596 / 0.275898 (0.193698) | 0.493068 / 0.323480 (0.169588) | 0.006014 / 0.007986 (-0.001971) | 0.003902 / 0.004328 (-0.000426) | 0.077142 / 0.004250 (0.072891) | 0.064355 / 0.037052 (0.027303) | 0.468859 / 0.258489 (0.210370) | 0.504002 / 0.293841 (0.210161) | 0.037606 / 0.128546 (-0.090940) | 0.010141 / 0.075646 (-0.065505) | 0.083790 / 0.419271 (-0.335482) | 0.060923 / 0.043533 (0.017390) | 0.464752 / 0.255139 (0.209613) | 0.500464 / 0.283200 (0.217264) | 0.031183 / 0.141683 (-0.110499) | 1.779294 / 1.452155 (0.327139) | 1.870848 / 1.492716 (0.378131) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.246567 / 0.018006 (0.228560) | 0.477182 / 0.000490 (0.476693) | 0.000426 / 0.000200 (0.000226) | 0.000067 / 0.000054 (0.000013) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.035377 / 0.037411 (-0.002034) | 0.106042 / 0.014526 (0.091516) | 0.119237 / 0.176557 (-0.057320) | 0.182145 / 0.737135 (-0.554991) | 0.119537 / 0.296338 (-0.176801) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.491352 / 0.215209 (0.276143) | 4.824220 / 2.077655 (2.746565) | 2.652039 / 1.504120 (1.147919) | 2.535310 / 1.541195 (0.994116) | 2.620009 / 
1.468490 (1.151519) | 0.567865 / 4.584777 (-4.016912) | 4.158795 / 3.745712 (0.413082) | 6.042582 / 5.269862 (0.772721) | 3.957193 / 4.565676 (-0.608484) | 0.066647 / 0.424275 (-0.357628) | 0.008893 / 0.007607 (0.001285) | 0.570137 / 0.226044 (0.344093) | 5.687126 / 2.268929 (3.418198) | 3.137605 / 55.444624 (-52.307019) | 2.655979 / 6.876477 (-4.220498) | 2.893338 / 2.142072 (0.751265) | 0.698388 / 4.805227 (-4.106840) | 0.154897 / 6.500664 (-6.345767) | 0.071208 / 0.075469 (-0.004261) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.619346 / 1.841788 (-0.222441) | 22.782510 / 8.074308 (14.708202) | 16.317395 / 10.191392 (6.126003) | 0.197630 / 0.680424 (-0.482794) | 0.021795 / 0.534201 (-0.512406) | 0.466982 / 0.579283 (-0.112302) | 0.468609 / 0.434364 (0.034245) | 0.574380 / 0.540337 (0.034043) | 0.759827 / 1.386936 (-0.627109) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#8c1c5d8268ae59a0dcaea47da825e87c3f9528b4 \"CML watermark\")\n" ]
2023-07-26T15:27:49Z
2023-07-26T16:24:43Z
2023-07-26T16:14:34Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/6080.diff", "html_url": "https://github.com/huggingface/datasets/pull/6080", "merged_at": "2023-07-26T16:14:34Z", "patch_url": "https://github.com/huggingface/datasets/pull/6080.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/6080" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/6080/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6080/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3220
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3220/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3220/comments
https://api.github.com/repos/huggingface/datasets/issues/3220/events
https://github.com/huggingface/datasets/issues/3220
1,045,549,029
I_kwDODunzps4-Uc_l
3,220
Add documentation about dataset viewer feature
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
open
false
null
[]
null
[ "In particular, include this somewhere in the docs: https://huggingface.co/docs/hub/datasets-viewer#access-the-parquet-files\r\n\r\nSee https://github.com/huggingface/hub-docs/issues/563" ]
2021-11-05T08:11:19Z
2023-09-25T11:48:38Z
null
MEMBER
null
null
null
Add more details to the docs about the dataset viewer feature on the Hub. CC: @julien-c
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3220/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3220/timeline
null
null
false
https://api.github.com/repos/huggingface/datasets/issues/1623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1623/comments
https://api.github.com/repos/huggingface/datasets/issues/1623/events
https://github.com/huggingface/datasets/pull/1623
772,950,710
MDExOlB1bGxSZXF1ZXN0NTQ0MTI2ODQ4
1,623
Add CLIMATE-FEVER dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/1658969?v=4", "events_url": "https://api.github.com/users/tdiggelm/events{/privacy}", "followers_url": "https://api.github.com/users/tdiggelm/followers", "following_url": "https://api.github.com/users/tdiggelm/following{/other_user}", "gists_url": "https://api.github.com/users/tdiggelm/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/tdiggelm", "id": 1658969, "login": "tdiggelm", "node_id": "MDQ6VXNlcjE2NTg5Njk=", "organizations_url": "https://api.github.com/users/tdiggelm/orgs", "received_events_url": "https://api.github.com/users/tdiggelm/received_events", "repos_url": "https://api.github.com/users/tdiggelm/repos", "site_admin": false, "starred_url": "https://api.github.com/users/tdiggelm/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/tdiggelm/subscriptions", "type": "User", "url": "https://api.github.com/users/tdiggelm" }
[]
closed
false
null
[]
null
[ "Thank you @lhoestq for your comments! 😄 I added your suggested changes, ran the tests and regenerated `dataset_infos.json` and `dummy_data`." ]
2020-12-22T13:34:05Z
2020-12-22T17:53:53Z
2020-12-22T17:53:53Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1623.diff", "html_url": "https://github.com/huggingface/datasets/pull/1623", "merged_at": "2020-12-22T17:53:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/1623.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1623" }
As suggested by @SBrandeis, a fresh PR that adds CLIMATE-FEVER. Replaces PR #1579. --- A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the English Wikipedia that support, refute, or do not give enough information to validate the claim, totalling 7,675 claim-evidence pairs. The dataset features challenging claims that relate to multiple facets, and disputed cases where both supporting and refuting evidence are present. More information can be found at: * Homepage: http://climatefever.ai * Paper: https://arxiv.org/abs/2012.00614
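For reference, a hedged usage sketch of loading the dataset once merged. The config-less call, the `test` split, and the `claim`/`evidences` field names are assumptions based on the description above, not confirmed by this PR.

```python
# Hedged usage sketch: the split and field names are assumptions based on
# the dataset description in this PR, not confirmed against the final schema.
from datasets import load_dataset

ds = load_dataset("climate_fever", split="test")
example = ds[0]
print(example["claim"])         # a real-world climate claim
print(example["evidences"][0])  # one of up to five annotated evidence sentences
```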
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1623/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1623/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1923
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1923/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1923/comments
https://api.github.com/repos/huggingface/datasets/issues/1923/events
https://github.com/huggingface/datasets/pull/1923
813,363,472
MDExOlB1bGxSZXF1ZXN0NTc3NTI0MTU0
1,923
Fix save_to_disk with relative path
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2021-02-22T10:27:19Z
2021-02-22T11:22:44Z
2021-02-22T11:22:43Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1923.diff", "html_url": "https://github.com/huggingface/datasets/pull/1923", "merged_at": "2021-02-22T11:22:43Z", "patch_url": "https://github.com/huggingface/datasets/pull/1923.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1923" }
As noticed in #1919 and #1920, the target directory was not created using `makedirs`, so saving to it raised `FileNotFoundError`. For absolute paths it worked, but not for the right reason: the target path was the same as the temporary path where in-memory data are written as an intermediate step. I added the `makedirs` call using `fs.makedirs` in order to support remote filesystems, and I fixed the issue with the target path being the temporary path. I also added a test case for relative paths in save_to_disk. Thanks to @M-Salti for reporting and investigating.
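As a hedged illustration of the idea behind the fix (a sketch, not the actual patch): the missing step amounts to creating the destination directory through the filesystem abstraction before writing, so that relative local paths and remote filesystems both work.

```python
# Sketch of the idea behind the fix, not the actual patch: create the
# destination directory via fsspec before save_to_disk writes into it.
import fsspec
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"]})

dest = "relative/path/to/dataset"  # hypothetical relative destination
fs = fsspec.filesystem("file")     # local filesystem here; could be s3, gcs, ...
fs.makedirs(dest, exist_ok=True)   # the call that was missing

ds.save_to_disk(dest)
```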
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/1923/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1923/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2491
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2491/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2491/comments
https://api.github.com/repos/huggingface/datasets/issues/2491/events
https://github.com/huggingface/datasets/pull/2491
919,714,506
MDExOlB1bGxSZXF1ZXN0NjY4OTg5MTUw
2,491
Add Eduge classification dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/6023883?v=4", "events_url": "https://api.github.com/users/enod/events{/privacy}", "followers_url": "https://api.github.com/users/enod/followers", "following_url": "https://api.github.com/users/enod/following{/other_user}", "gists_url": "https://api.github.com/users/enod/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/enod", "id": 6023883, "login": "enod", "node_id": "MDQ6VXNlcjYwMjM4ODM=", "organizations_url": "https://api.github.com/users/enod/orgs", "received_events_url": "https://api.github.com/users/enod/received_events", "repos_url": "https://api.github.com/users/enod/repos", "site_admin": false, "starred_url": "https://api.github.com/users/enod/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/enod/subscriptions", "type": "User", "url": "https://api.github.com/users/enod" }
[]
closed
false
null
[]
null
[ "Closing this PR as I'll submit a new one - bug free" ]
2021-06-13T04:37:01Z
2021-06-13T05:06:48Z
2021-06-13T05:06:38Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2491.diff", "html_url": "https://github.com/huggingface/datasets/pull/2491", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/2491.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2491" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2491/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2491/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3706
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3706/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3706/comments
https://api.github.com/repos/huggingface/datasets/issues/3706/events
https://github.com/huggingface/datasets/issues/3706
1,132,218,874
I_kwDODunzps5DfEn6
3,706
Unable to load dataset 'big_patent'
{ "avatar_url": "https://avatars.githubusercontent.com/u/26432753?v=4", "events_url": "https://api.github.com/users/ankitk2109/events{/privacy}", "followers_url": "https://api.github.com/users/ankitk2109/followers", "following_url": "https://api.github.com/users/ankitk2109/following{/other_user}", "gists_url": "https://api.github.com/users/ankitk2109/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ankitk2109", "id": 26432753, "login": "ankitk2109", "node_id": "MDQ6VXNlcjI2NDMyNzUz", "organizations_url": "https://api.github.com/users/ankitk2109/orgs", "received_events_url": "https://api.github.com/users/ankitk2109/received_events", "repos_url": "https://api.github.com/users/ankitk2109/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ankitk2109/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ankitk2109/subscriptions", "type": "User", "url": "https://api.github.com/users/ankitk2109" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi @ankitk2109,\r\n\r\nHave you tried passing the split name with the keyword `split=`? See e.g. an example in our Quick Start docs: https://huggingface.co/docs/datasets/quickstart.html#load-the-dataset-and-model\r\n```python\r\n ds = load_dataset(\"big_patent\", \"d\", split=\"validation\")", "Hi @albertvillanova,\r\n\r\nThanks for your response.\r\n\r\nYes, I tried the `split='validation'` as well. But getting the same issue. ", "I'm sorry, but I can't reproduce your problem:\r\n```python\r\nIn [5]: ds = load_dataset(\"big_patent\", \"d\", split=\"validation\")\r\nDownloading and preparing dataset big_patent/d (download: 6.01 GiB, generated: 169.61 MiB, post-processed: Unknown size, total: 6.17 GiB) to .../.cache/big_patent/d/1.0.0/bdefa7c0b39fba8bba1c6331b70b738e30d63c8ad4567f983ce315a5fef6131c...\r\nDownloading data: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6.45G/6.45G [27:36<00:00, 3.89MB/s]\r\nExtracting data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [03:18<00:00, 66.08s/it]\r\nDataset big_patent downloaded and prepared to .../.cache/big_patent/d/1.0.0/bdefa7c0b39fba8bba1c6331b70b738e30d63c8ad4567f983ce315a5fef6131c. Subsequent calls will reuse this data. \r\n\r\nIn [6]: ds\r\nOut[6]: \r\nDataset({\r\n features: ['description', 'abstract'],\r\n num_rows: 565\r\n})\r\n", "Maybe you had a connection issue while downloading the file and this was corrupted?\r\nOur cache system uses the file you downloaded first time.\r\nIf so, you could try forcing redownload of the file with:\r\n```python\r\nds = load_dataset(\"big_patent\", \"d\", split=\"validation\", download_mode=\"force_redownload\")", "I am able to download the dataset with ``` download_mode=\"force_redownload\"```. As you mentioned it was an issue with the cached version which was failed earlier due to a network issue. I am closing the issue now, once again thank you." ]
2022-02-11T09:48:34Z
2022-02-14T15:26:03Z
2022-02-14T15:26:03Z
NONE
null
null
null
## Describe the bug Unable to load the "big_patent" dataset ## Steps to reproduce the bug ```python load_dataset('big_patent', 'd', 'validation') ``` ## Expected results Download big_patent's validation split from the 'd' subset ## Actual results Getting an error saying: {FileNotFoundError}Local file ..\huggingface\datasets\downloads\6159313604f4f2c01e7d1cac52139343b6c07f73f6de348d09be6213478455c5\bigPatentData\train.tar.gz doesn't exist ## Environment info - `datasets` version: 1.18.3 - Platform: Windows - Python version: 3.8 - PyArrow version: 7.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3706/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3706/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/3768
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3768/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3768/comments
https://api.github.com/repos/huggingface/datasets/issues/3768/events
https://github.com/huggingface/datasets/pull/3768
1,146,102,442
PR_kwDODunzps4zPobl
3,768
Fix HfFileSystem docstring
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[]
2022-02-21T18:14:40Z
2022-02-22T09:13:03Z
2022-02-22T09:13:02Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3768.diff", "html_url": "https://github.com/huggingface/datasets/pull/3768", "merged_at": "2022-02-22T09:13:02Z", "patch_url": "https://github.com/huggingface/datasets/pull/3768.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3768" }
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3768/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3768/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3323
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3323/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3323/comments
https://api.github.com/repos/huggingface/datasets/issues/3323/events
https://github.com/huggingface/datasets/pull/3323
1,064,660,452
PR_kwDODunzps4vEZwq
3,323
Fix wrongly converted assert
{ "avatar_url": "https://avatars.githubusercontent.com/u/19492473?v=4", "events_url": "https://api.github.com/users/eliasws/events{/privacy}", "followers_url": "https://api.github.com/users/eliasws/followers", "following_url": "https://api.github.com/users/eliasws/following{/other_user}", "gists_url": "https://api.github.com/users/eliasws/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/eliasws", "id": 19492473, "login": "eliasws", "node_id": "MDQ6VXNlcjE5NDkyNDcz", "organizations_url": "https://api.github.com/users/eliasws/orgs", "received_events_url": "https://api.github.com/users/eliasws/received_events", "repos_url": "https://api.github.com/users/eliasws/repos", "site_admin": false, "starred_url": "https://api.github.com/users/eliasws/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/eliasws/subscriptions", "type": "User", "url": "https://api.github.com/users/eliasws" }
[]
closed
false
null
[]
null
[ "Closes #3327 " ]
2021-11-26T16:05:39Z
2021-11-26T16:44:12Z
2021-11-26T16:44:11Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3323.diff", "html_url": "https://github.com/huggingface/datasets/pull/3323", "merged_at": "2021-11-26T16:44:11Z", "patch_url": "https://github.com/huggingface/datasets/pull/3323.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3323" }
Seems like this assertion was replaced by an exception but the condition got wrongly converted.
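A generic illustration of this class of bug (not the exact code from this PR): when an `assert` is converted to a raised exception, the condition must be negated, otherwise valid input raises and invalid input passes silently.

```python
# Generic illustration of the pitfall; not the exact code from this PR.

# Original style:
#     assert num_proc >= 1, "num_proc must be a positive integer"

# Wrong conversion -- the condition was kept as-is, so valid input raises:
def check_wrong(num_proc):
    if num_proc >= 1:
        raise ValueError("num_proc must be a positive integer")

# Correct conversion -- the condition is negated:
def check_right(num_proc):
    if not num_proc >= 1:
        raise ValueError("num_proc must be a positive integer")

check_right(4)    # passes silently, as intended
# check_wrong(4)  # would incorrectly raise ValueError
```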
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3323/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3323/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2174
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2174/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2174/comments
https://api.github.com/repos/huggingface/datasets/issues/2174/events
https://github.com/huggingface/datasets/pull/2174
851,383,675
MDExOlB1bGxSZXF1ZXN0NjA5ODE2OTQ2
2,174
Pin docutils for better doc
{ "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "events_url": "https://api.github.com/users/sgugger/events{/privacy}", "followers_url": "https://api.github.com/users/sgugger/followers", "following_url": "https://api.github.com/users/sgugger/following{/other_user}", "gists_url": "https://api.github.com/users/sgugger/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgugger", "id": 35901082, "login": "sgugger", "node_id": "MDQ6VXNlcjM1OTAxMDgy", "organizations_url": "https://api.github.com/users/sgugger/orgs", "received_events_url": "https://api.github.com/users/sgugger/received_events", "repos_url": "https://api.github.com/users/sgugger/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgugger/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgugger/subscriptions", "type": "User", "url": "https://api.github.com/users/sgugger" }
[]
closed
false
null
[]
null
[]
2021-04-06T12:40:20Z
2021-04-06T12:55:53Z
2021-04-06T12:55:53Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2174.diff", "html_url": "https://github.com/huggingface/datasets/pull/2174", "merged_at": "2021-04-06T12:55:53Z", "patch_url": "https://github.com/huggingface/datasets/pull/2174.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2174" }
The latest release of docutils makes the navbar in the documentation render weirdly and causes the Markdown to be wrongly interpreted: ![image](https://user-images.githubusercontent.com/35901082/113711773-5be55280-96b3-11eb-9b3b-9794f17709aa.png) We had the same problem in Transformers and solved it by pinning docutils (a dependency of sphinx). You can see the docs built after the change [here](https://32769-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html).
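For concreteness, the change amounts to a version pin in the docs requirements. A hedged sketch assuming a `setup.py` extras layout; the exact version constraint is an assumption, not taken from this PR's diff.

```python
# Hedged sketch of a docutils pin in setup.py extras; the pinned version
# is an assumption for illustration, not taken from this PR's diff.
extras = {}
extras["docs"] = [
    "docutils==0.16.0",  # assumed pin: newer docutils broke the RTD theme navbar
    "recommonmark",
    "sphinx",
    "sphinx-markdown-tables",
    "sphinx-rtd-theme",
]
```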
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2174/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2174/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2334
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2334/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2334/comments
https://api.github.com/repos/huggingface/datasets/issues/2334/events
https://github.com/huggingface/datasets/pull/2334
879,810,107
MDExOlB1bGxSZXF1ZXN0NjMzNTAzNTEw
2,334
Updating the DART file checksums in GEM
{ "avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4", "events_url": "https://api.github.com/users/yjernite/events{/privacy}", "followers_url": "https://api.github.com/users/yjernite/followers", "following_url": "https://api.github.com/users/yjernite/following{/other_user}", "gists_url": "https://api.github.com/users/yjernite/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/yjernite", "id": 10469459, "login": "yjernite", "node_id": "MDQ6VXNlcjEwNDY5NDU5", "organizations_url": "https://api.github.com/users/yjernite/orgs", "received_events_url": "https://api.github.com/users/yjernite/received_events", "repos_url": "https://api.github.com/users/yjernite/repos", "site_admin": false, "starred_url": "https://api.github.com/users/yjernite/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/yjernite/subscriptions", "type": "User", "url": "https://api.github.com/users/yjernite" }
[]
closed
false
null
[]
null
[ "@sebastianGehrmann " ]
2021-05-07T21:53:44Z
2021-05-07T22:18:10Z
2021-05-07T22:18:10Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2334.diff", "html_url": "https://github.com/huggingface/datasets/pull/2334", "merged_at": "2021-05-07T22:18:10Z", "patch_url": "https://github.com/huggingface/datasets/pull/2334.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2334" }
The DART files were just updated on the source GitHub https://github.com/Yale-LILY/dart/commit/34b3c872da4811523e334f1631e54ca8105dffab
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2334/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2334/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2275
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2275/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2275/comments
https://api.github.com/repos/huggingface/datasets/issues/2275/events
https://github.com/huggingface/datasets/issues/2275
869,378,311
MDU6SXNzdWU4NjkzNzgzMTE=
2,275
SNLI dataset has labels of -1
{ "avatar_url": "https://avatars.githubusercontent.com/u/17426779?v=4", "events_url": "https://api.github.com/users/puzzler10/events{/privacy}", "followers_url": "https://api.github.com/users/puzzler10/followers", "following_url": "https://api.github.com/users/puzzler10/following{/other_user}", "gists_url": "https://api.github.com/users/puzzler10/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/puzzler10", "id": 17426779, "login": "puzzler10", "node_id": "MDQ6VXNlcjE3NDI2Nzc5", "organizations_url": "https://api.github.com/users/puzzler10/orgs", "received_events_url": "https://api.github.com/users/puzzler10/received_events", "repos_url": "https://api.github.com/users/puzzler10/repos", "site_admin": false, "starred_url": "https://api.github.com/users/puzzler10/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/puzzler10/subscriptions", "type": "User", "url": "https://api.github.com/users/puzzler10" }
[]
closed
false
null
[]
null
[ "Hi @puzzler10, \r\nThose examples where `gold_label` field was empty, -1 label was alloted to it. In order to remove it you can filter the samples from train/val/test splits. Here's how you can drop those rows from the dataset:\r\n`dataset = load_dataset(\"snli\")`\r\n`dataset_test_filter = dataset['test'].filter(lambda example: example['label'] != -1)`\r\n\r\nI agree it should have been mentioned in the documentation. I'll raise a PR regarding the same. Thanks for pointing out!" ]
2021-04-28T00:32:25Z
2021-05-17T13:34:18Z
2021-05-17T13:34:18Z
NONE
null
null
null
There are a number of rows with a label of -1 in the SNLI dataset. The dataset descriptions [here](https://nlp.stanford.edu/projects/snli/) and [here](https://github.com/huggingface/datasets/tree/master/datasets/snli) don't list -1 as a possible label, and neither does the dataset viewer. As examples, see index 107 or 124 of the test set. It isn't clear what these labels mean. I found a [line of code](https://github.com/huggingface/datasets/blob/80e59ef178d3bb2090d091bc32315c655eb0633d/datasets/snli/snli.py#L94) that seems to insert them, but it remains unclear why they are there. The current workaround is to simply drop those rows before training any model. Perhaps the documentation should be updated.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2275/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2275/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/2285
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2285/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2285/comments
https://api.github.com/repos/huggingface/datasets/issues/2285/events
https://github.com/huggingface/datasets/issues/2285
871,005,236
MDU6SXNzdWU4NzEwMDUyMzY=
2,285
Help understanding how to build a dataset for language modeling as with the old TextDataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/46021411?v=4", "events_url": "https://api.github.com/users/danieldiezmallo/events{/privacy}", "followers_url": "https://api.github.com/users/danieldiezmallo/followers", "following_url": "https://api.github.com/users/danieldiezmallo/following{/other_user}", "gists_url": "https://api.github.com/users/danieldiezmallo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/danieldiezmallo", "id": 46021411, "login": "danieldiezmallo", "node_id": "MDQ6VXNlcjQ2MDIxNDEx", "organizations_url": "https://api.github.com/users/danieldiezmallo/orgs", "received_events_url": "https://api.github.com/users/danieldiezmallo/received_events", "repos_url": "https://api.github.com/users/danieldiezmallo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/danieldiezmallo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/danieldiezmallo/subscriptions", "type": "User", "url": "https://api.github.com/users/danieldiezmallo" }
[]
closed
false
null
[]
null
[ "\r\nI received an answer for this question on the HuggingFace Datasets forum by @lhoestq\r\n\r\nHi !\r\n\r\nIf you want to tokenize line by line, you can use this:\r\n\r\n```\r\nmax_seq_length = 512\r\nnum_proc = 4\r\n\r\ndef tokenize_function(examples):\r\n# Remove empty lines\r\nexamples[\"text\"] = [line for line in examples[\"text\"] if len(line) > 0 and not line.isspace()]\r\nreturn tokenizer(\r\n examples[\"text\"],\r\n truncation=True,\r\n max_length=max_seq_length,\r\n)\r\n\r\ntokenized_dataset = dataset.map(\r\ntokenize_function,\r\nbatched=True,\r\nnum_proc=num_proc,\r\nremove_columns=[\"text\"],\r\n)\r\n```\r\n\r\nThough the TextDataset was doing a different processing by concatenating all the texts and building blocks of size 512. If you need this behavior, then you must apply an additional map function after the tokenization:\r\n\r\n```\r\n# Main data processing function that will concatenate all texts from\r\n# our dataset and generate chunks of max_seq_length.\r\ndef group_texts(examples):\r\n# Concatenate all texts.\r\nconcatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}\r\ntotal_length = len(concatenated_examples[list(examples.keys())[0]])\r\n# We drop the small remainder, we could add padding if the model supported it instead of this drop,\r\n# you can customize this part to your needs.\r\ntotal_length = (total_length // max_seq_length) * max_seq_length\r\n# Split by chunks of max_len.\r\nresult = {\r\n k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]\r\n for k, t in concatenated_examples.items()\r\n}\r\nreturn result\r\n\r\n# Note that with `batched=True`, this map processes 1,000 texts together,\r\n# so group_texts throws away a remainder for each of those groups of 1,000 texts.\r\n# You can adjust that batch_size here but a higher value might be slower to preprocess.\r\n\r\ntokenized_dataset = tokenized_dataset.map(\r\ngroup_texts,\r\nbatched=True,\r\nnum_proc=num_proc,\r\n)\r\n```\r\n\r\nThis code comes from the processing of the run_mlm.py example script of transformers\r\n\r\n", "Resolved" ]
2021-04-29T13:16:45Z
2021-05-19T07:22:45Z
2021-05-19T07:22:39Z
NONE
null
null
null
Hello, I am trying to load a custom dataset that I will then use for language modeling. The dataset consists of a text file with a whole document on each line, meaning that each line exceeds the usual 512-token limit of most tokenizers. I would like to understand how to build a text dataset that tokenizes each line, after first splitting the documents into lines of a "tokenizable" size, as the old TextDataset class would do. With TextDataset you only had to do the following, and a tokenized dataset without text loss was ready to pass to a DataCollator (a minimal chunking sketch with `datasets` follows this record): ``` model_checkpoint = 'distilbert-base-uncased' from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) from transformers import TextDataset dataset = TextDataset( tokenizer=tokenizer, file_path="path/to/text_file.txt", block_size=512, ) ``` For now, what I have is the following, which, of course, throws an error because each line is longer than the tokenizer's maximum block size: ``` import datasets dataset = datasets.load_dataset('text', data_files='path/to/text_file.txt') model_checkpoint = 'distilbert-base-uncased' tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) def tokenize_function(examples): return tokenizer(examples["text"]) tokenized_datasets = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) tokenized_datasets ``` So what would be the "standard" way of creating a dataset in the way it was done before? Thank you very much for the help :))
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2285/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2285/timeline
null
completed
false
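A minimal, self-contained sketch of the approach from the accepted answer in the record above: tokenize each document without truncation, then concatenate and re-chunk into fixed-size blocks. The checkpoint and block size follow the question, the file path is the question's placeholder, and this is an illustration rather than the exact forum code.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

block_size = 512
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

# One whole document per line, as described in the question (placeholder path).
dataset = load_dataset("text", data_files={"train": "path/to/text_file.txt"})

def tokenize_function(examples):
    # Drop empty lines, then tokenize WITHOUT truncation so no text is lost.
    texts = [line for line in examples["text"] if line and not line.isspace()]
    return tokenizer(texts)

def group_texts(examples):
    # Concatenate all token lists, then split into blocks of `block_size`,
    # dropping the small remainder (as run_mlm.py does).
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = (len(concatenated["input_ids"]) // block_size) * block_size
    return {
        k: [t[i : i + block_size] for i in range(0, total_length, block_size)]
        for k, t in concatenated.items()
    }

lm_dataset = dataset.map(
    tokenize_function, batched=True, remove_columns=["text"]
).map(group_texts, batched=True)
```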
https://api.github.com/repos/huggingface/datasets/issues/860
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/860/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/860/comments
https://api.github.com/repos/huggingface/datasets/issues/860/events
https://github.com/huggingface/datasets/issues/860
744,750,691
MDU6SXNzdWU3NDQ3NTA2OTE=
860
wmt16 cs-en does not download
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "We know host this file, so downloading should be more robust." ]
2020-11-17T13:45:35Z
2022-10-05T12:27:00Z
2022-10-05T12:26:59Z
CONTRIBUTOR
null
null
null
Hi, I am trying with the wmt16 cs-en pair; this is perhaps similar to the ro-en issue. Thanks for the help. Traceback: split="train", n_obs=data_args.n_train) for task in data_args.task} File "finetune_t5_trainer.py", line 109, in <dictcomp> split="train", n_obs=data_args.n_train) for task in data_args.task} File "/home/rabeeh/internship/seq2seq/tasks/tasks.py", line 82, in get_dataset dataset = load_dataset("wmt16", self.pair, split=split) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset ignore_verifications=ignore_verifications, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 476, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/builder.py", line 531, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/rabeeh/.cache/huggingface/modules/datasets_modules/datasets/wmt16/7b2c4443a7d34c2e13df267eaa8cab4c62dd82f6b62b0d9ecc2e3a673ce17308/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 254, in download_and_extract return self.extract(self.download(url_or_urls)) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/download_manager.py", line 179, in download num_proc=download_config.num_proc, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 225, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in _single_map_nested mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 181, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 163, in _single_map_nested return function(data_struct) File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/opt/conda/envs/internship/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 475, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/860/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/860/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/332
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/332/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/332/comments
https://api.github.com/repos/huggingface/datasets/issues/332/events
https://github.com/huggingface/datasets/pull/332
649,140,135
MDExOlB1bGxSZXF1ZXN0NDQyODMwMzMz
332
Add wiki_dpr
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[]
closed
false
null
[]
null
[ "The two configurations don't have the same sizes, I may change that so that they both have 21015300 examples for convenience, even though it's supposed to have 21015324 examples in total.\r\n\r\nOne configuration only has 21015300 examples because it seems that the embeddings of the last 24 examples are missing.", "It's ok to merge now imo. I'll make another PR if we find a way to have the missing embeddings" ]
2020-07-01T17:12:00Z
2020-07-06T12:21:17Z
2020-07-06T12:21:16Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/332.diff", "html_url": "https://github.com/huggingface/datasets/pull/332", "merged_at": "2020-07-06T12:21:16Z", "patch_url": "https://github.com/huggingface/datasets/pull/332.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/332" }
Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from the English Wikipedia along with their 768-dim embeddings computed using DPR's context encoder. Notes on the implementation: - There are two configs: with and without the embeddings (73GB vs 14GB) - I used a non-fixed-size sequence of floats to describe the feature format of the embeddings (a feature-declaration sketch follows this record). I wanted to use fixed-size sequences, but I had issues with reading the arrow file afterwards (for example `dataset[0]` was crashing) - I added the case for lists of URLs as input of the download_manager
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/332/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/332/timeline
null
null
true
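To make the feature-format note above concrete, here is a hedged sketch of declaring a variable-length float sequence for the embeddings; the column names are assumptions based on the PR description, not a verbatim copy of the wiki_dpr script.

```python
from datasets import Features, Sequence, Value

# Non-fixed-size sequence of floats for the 768-dim DPR embeddings, as the PR
# describes (fixed-size sequences caused read errors such as `dataset[0]` crashing).
features = Features(
    {
        "id": Value("string"),
        "text": Value("string"),
        "title": Value("string"),
        "embeddings": Sequence(Value("float32")),
    }
)
print(features)
```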
https://api.github.com/repos/huggingface/datasets/issues/4115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4115/comments
https://api.github.com/repos/huggingface/datasets/issues/4115/events
https://github.com/huggingface/datasets/issues/4115
1,194,907,555
I_kwDODunzps5HONej
4,115
ImageFolder add option to ignore some folders like '.ipynb_checkpoints'
{ "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "events_url": "https://api.github.com/users/cceyda/events{/privacy}", "followers_url": "https://api.github.com/users/cceyda/followers", "following_url": "https://api.github.com/users/cceyda/following{/other_user}", "gists_url": "https://api.github.com/users/cceyda/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cceyda", "id": 15624271, "login": "cceyda", "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "organizations_url": "https://api.github.com/users/cceyda/orgs", "received_events_url": "https://api.github.com/users/cceyda/received_events", "repos_url": "https://api.github.com/users/cceyda/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cceyda/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cceyda/subscriptions", "type": "User", "url": "https://api.github.com/users/cceyda" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Maybe it would be nice to ignore private dirs like this one (ones starting with `.`) by default. \r\n\r\nCC @mariosasko ", "Maybe we can add a `ignore_hidden_files` flag to the builder configs of our packaged loaders (to be consistent across all of them), wdyt @lhoestq @albertvillanova? ", "I think they should always ignore them actually ! Not sure if adding a flag would be helpful", "@lhoestq But what if the user explicitly requests those files via regex?\r\n\r\n`glob.glob` ignores hidden files (files starting with \".\") by default unless they are explicitly requested, but fsspec's `glob` doesn't follow this behavior, which is probably a bug, so maybe we can raise an issue or open a PR in their repo?", "> @lhoestq But what if the user explicitly requests those files via regex?\r\n\r\nUsually hidden files are meant to be ignored. If they are data files, they must be placed outside a hidden directory in the first place right ? I think it's more sensible to explain this than adding a flag.\r\n\r\n> glob.glob ignores hidden files (files starting with \".\") by default unless they are explicitly requested, but fsspec's glob doesn't follow this behavior, which is probably a bug, so maybe we can raise an issue or open a PR in their repo?\r\n\r\nAfter globbing using `fsspec`, we already ignore files that start with a `.` in `_resolve_single_pattern_locally` and `_resolve_single_pattern_in_dataset_repository`, I guess we can just account for parent directories as well ?\r\n\r\nWe could open an issue on `fsspec` but I think they won't change this since it's an important breaking change for them." ]
2022-04-06T17:29:43Z
2022-06-01T13:04:16Z
2022-06-01T13:04:16Z
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** I sometimes like to peek at the dataset images from JupyterLab; as a result, an '.ipynb_checkpoints' folder appears where my dataset lives and (I just realized) leads to accidental duplicate image additions. I think this is an easy thing to miss, especially if the dataset is very large. **Describe the solution you'd like** Maybe have an `ignore` option or something .gitignore-style: `dataset = load_dataset("imagefolder", data_dir="./data/original", ignore="regex?")` **Describe alternatives you've considered** Could filter out the files manually (see the sketch after this record).
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4115/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4115/timeline
null
completed
false
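A sketch of the manual workaround from the "alternatives" section of the request above: build the file list yourself, skipping hidden directories such as `.ipynb_checkpoints`, and pass it through `data_files`. The directory layout is the example from the issue; label inference may behave differently when passing explicit file lists.

```python
import pathlib
from datasets import load_dataset

data_dir = pathlib.Path("./data/original")

# Keep only files with no hidden path component (e.g. .ipynb_checkpoints).
files = [
    str(path)
    for path in data_dir.rglob("*")
    if path.is_file()
    and not any(part.startswith(".") for part in path.relative_to(data_dir).parts)
]

dataset = load_dataset("imagefolder", data_files={"train": files})
```

Checking components relative to `data_dir` avoids accidentally filtering on hidden directories in the absolute path prefix.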
https://api.github.com/repos/huggingface/datasets/issues/67
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/67/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/67/comments
https://api.github.com/repos/huggingface/datasets/issues/67/events
https://github.com/huggingface/datasets/pull/67
614,798,483
MDExOlB1bGxSZXF1ZXN0NDE1Mjc5NjI0
67
[Tests] Test files locally
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten" }
[]
closed
false
null
[]
null
[ "Super nice, good job @patrickvonplaten!" ]
2020-05-08T15:02:43Z
2020-05-08T19:50:47Z
2020-05-08T15:17:00Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/67.diff", "html_url": "https://github.com/huggingface/datasets/pull/67", "merged_at": "2020-05-08T15:17:00Z", "patch_url": "https://github.com/huggingface/datasets/pull/67.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/67" }
This PR adds an `aws` and a `local` decorator to the tests so that tests now run on the local datasets. By default, `aws` is deactivated, `local` is activated, and `slow` is deactivated, so that only 1 test per dataset runs on circle ci (an illustrative decorator sketch follows this record). **When local is activated, all folders in `./datasets` are tested.** **Important** When adding a dataset, we should no longer upload it to AWS. The steps are: 1. Open a PR 2. Add a dataset as described in `datasets/README.md` 3. If all tests pass, push to master Currently we have 49 functional datasets in our code base. We have 6 datasets "under construction" that don't pass the tests, so I put them in a folder "datasets_under_construction"; it would be nice to open a PR to fix them and move them into the `datasets` folder. **Important** When running tests locally, the datasets are cached, so to rerun them delete your local cache via: `rm -r ~/.cache/huggingface/datasets/*` @thomwolf @mariamabarham @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/67/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/67/timeline
null
null
true
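The `aws`/`local`/`slow` decorators described in the PR above could look roughly like environment-variable-gated pytest skips; this is an illustrative sketch of the idea under those assumptions, not the PR's actual code.

```python
import os
import pytest

def slow(test_case):
    # Deactivated by default; enable with RUN_SLOW=1.
    return pytest.mark.skipif(os.environ.get("RUN_SLOW", "0") != "1", reason="slow test")(test_case)

def aws(test_case):
    # Deactivated by default; enable with RUN_AWS=1.
    return pytest.mark.skipif(os.environ.get("RUN_AWS", "0") != "1", reason="aws test")(test_case)

def local(test_case):
    # Activated by default; disable with RUN_LOCAL=0.
    return pytest.mark.skipif(os.environ.get("RUN_LOCAL", "1") != "1", reason="local test")(test_case)

@local
def test_dataset_loads_locally():
    assert True  # placeholder: would load a dataset script from ./datasets
```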
https://api.github.com/repos/huggingface/datasets/issues/4764
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4764/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4764/comments
https://api.github.com/repos/huggingface/datasets/issues/4764/events
https://github.com/huggingface/datasets/pull/4764
1,321,295,961
PR_kwDODunzps48RMLu
4,764
Update CI badge
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-07-28T18:04:20Z
2022-07-29T11:36:37Z
2022-07-29T11:23:51Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4764.diff", "html_url": "https://github.com/huggingface/datasets/pull/4764", "merged_at": "2022-07-29T11:23:51Z", "patch_url": "https://github.com/huggingface/datasets/pull/4764.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4764" }
Replace the old CircleCI badge with a new one for GH Actions.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4764/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4764/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4573/comments
https://api.github.com/repos/huggingface/datasets/issues/4573/events
https://github.com/huggingface/datasets/pull/4573
1,285,023,629
PR_kwDODunzps46YEEa
4,573
Fix evaluation metadata for ncbi_disease
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[ { "color": "0e8a16", "default": false, "description": "Contribution to a dataset script", "id": 4564477500, "name": "dataset contribution", "node_id": "LA_kwDODunzps8AAAABEBBmPA", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution" } ]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "As discussed with @lewtun, we are closing this PR, because it requires first the task names to be aligned between AutoTrain and datasets." ]
2022-06-26T20:29:32Z
2023-09-24T09:35:07Z
2022-09-23T09:38:02Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4573.diff", "html_url": "https://github.com/huggingface/datasets/pull/4573", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/4573.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4573" }
This PR fixes the task in the evaluation metadata and removes the metrics info as we've decided this is not a great way to propagate this information downstream.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4573/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4573/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/1699
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1699/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1699/comments
https://api.github.com/repos/huggingface/datasets/issues/1699/events
https://github.com/huggingface/datasets/pull/1699
781,271,558
MDExOlB1bGxSZXF1ZXN0NTUxMDIzODE5
1,699
Update DBRD dataset card and download URL
{ "avatar_url": "https://avatars.githubusercontent.com/u/8875786?v=4", "events_url": "https://api.github.com/users/benjaminvdb/events{/privacy}", "followers_url": "https://api.github.com/users/benjaminvdb/followers", "following_url": "https://api.github.com/users/benjaminvdb/following{/other_user}", "gists_url": "https://api.github.com/users/benjaminvdb/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/benjaminvdb", "id": 8875786, "login": "benjaminvdb", "node_id": "MDQ6VXNlcjg4NzU3ODY=", "organizations_url": "https://api.github.com/users/benjaminvdb/orgs", "received_events_url": "https://api.github.com/users/benjaminvdb/received_events", "repos_url": "https://api.github.com/users/benjaminvdb/repos", "site_admin": false, "starred_url": "https://api.github.com/users/benjaminvdb/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benjaminvdb/subscriptions", "type": "User", "url": "https://api.github.com/users/benjaminvdb" }
[]
closed
false
null
[]
null
[ "not sure why the CI was not triggered though" ]
2021-01-07T12:16:43Z
2021-01-07T13:41:39Z
2021-01-07T13:40:59Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1699.diff", "html_url": "https://github.com/huggingface/datasets/pull/1699", "merged_at": "2021-01-07T13:40:59Z", "patch_url": "https://github.com/huggingface/datasets/pull/1699.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1699" }
I've added the Dutch Book Review Dataset (DBRD) during the recent sprint. This pull request makes two minor changes: 1. I'm changing the download URL from Google Drive to the dataset's GitHub release package. This is now possible because of PR #1316. 2. I've updated the dataset card. Cheers! 😄
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1699/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1699/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/5284
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5284/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5284/comments
https://api.github.com/repos/huggingface/datasets/issues/5284/events
https://github.com/huggingface/datasets/issues/5284
1,461,519,733
I_kwDODunzps5XHQV1
5,284
Features of IterableDataset set to None by remove column
{ "avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4", "events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}", "followers_url": "https://api.github.com/users/sanchit-gandhi/followers", "following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}", "gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sanchit-gandhi", "id": 93869735, "login": "sanchit-gandhi", "node_id": "U_kgDOBZhWpw", "organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs", "received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events", "repos_url": "https://api.github.com/users/sanchit-gandhi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions", "type": "User", "url": "https://api.github.com/users/sanchit-gandhi" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" } ]
null
[ "Related to https://github.com/huggingface/datasets/issues/5245", "#self-assign", "Thanks @lhoestq and @alvarobartt!\r\n\r\nThis would be extremely helpful to have working for the Whisper fine-tuning event - we're **only** training using streaming mode, so it'll be quite important to have this feature working to make training as easy as possible!\r\n\r\n_c.f._ https://twitter.com/sanchitgandhi99/status/1592188332171493377", "> Thanks @lhoestq and @alvarobartt!\n> \n> \n> \n> This would be extremely helpful to have working for the Whisper fine-tuning event - we're **only** training using streaming mode, so it'll be quite important to have this feature working to make training as easy as possible!\n> \n> \n> \n> _c.f._ https://twitter.com/sanchitgandhi99/status/1592188332171493377\n\nI'm almost done with at least a temporary fix to `rename_column`, `rename_columns`, and `remove_columns`, just trying to figure out how to extend it to the `map` function itself!\n\nI'll probably open the PR for review either tomorrow or Sunday hopefully! Glad I can help you and HuggingFace 🤗 ", "Awesome - thank you so much for this PR @alvarobartt! Is much appreciated!", "@sanchit-gandhi PR is ready and open for review at #5287, but there's still one issue I may need @lhoestq's input :hugs:", "Let us know @sanchit-gandhi if you need a new release of `datasets` soon with this fix included :)", "Thanks for the fix guys! We can direct people to install `datasets` from main if that's easier!", "Hey guys, any update around this? I'm facing the same issue with a streamable dataset. ", "Hi @asennoussi so this was already fixed and released as part of https://github.com/huggingface/datasets/releases/tag/2.8.0, so you should be able to install it as `pip install datasets==2.8.0` or just to use `pip install datasets --upgrade` to get the latest version, as of now, the https://github.com/huggingface/datasets/releases/tag/2.9.0 released last week! 
🤗", "Still facing the same issue though: \r\n```\r\nfrom datasets import IterableDatasetDict, load_dataset\r\n\r\nraw_datasets = vectorized_datasets = IterableDatasetDict()\r\n\r\n\r\nraw_datasets[\"train\"] = load_dataset(\"asennoussi/private\", split=\"train\", use_auth_token=True, streaming=True)\r\nraw_datasets[\"test\"] = load_dataset(\"asennoussi/private\", split=\"test\", use_auth_token=True, streaming=True)\r\n\r\nprint(\"Original features: \", raw_datasets['train'].features.keys())\r\n\r\n...\r\n\r\ndef prepare_dataset(batch):\r\n\r\n # load and (possibly) resample audio datato 16kHz\r\n audio = batch[\"audio\"]\r\n\r\n # compute log-Mel input features from input audio array \r\n batch[\"input_features\"] = processor.feature_extractor(audio[\"array\"], sampling_rate=audio[\"sampling_rate\"]).input_features[0]\r\n # compute input length of audio sample in seconds\r\n batch[\"input_length\"] = len(audio[\"array\"]) / audio[\"sampling_rate\"]\r\n \r\n # optional pre-processing steps\r\n transcription = batch[\"sentence\"]\r\n \r\n # encode target text to label ids\r\n batch[\"labels\"] = processor.tokenizer(transcription).input_ids\r\n batch[\"labels_length\"] = len(batch[\"labels\"])\r\n return batch\r\n...\r\nvectorized_datasets = vectorized_datasets.remove_columns(['input_length', 'labels_length']+list(next(iter(raw_datasets.values())).features))\r\nprint(\"Processed features: \", vectorized_datasets['train'].features)\r\nprint(\"First sample:\", next(iter(vectorized_datasets['train'])))\r\n\r\n```\r\n\r\nOutput: \r\n```\r\nOriginal features: dict_keys(['path', 'audio', 'sentence'])\r\nProcessed features: None\r\n```", "Hmm weird, could you try to print\r\n\r\n```python\r\nprint(\"Processed features: \", vectorized_datasets['train'].features)\r\n```\r\n\r\nagain after iterating over the `vectorized_datasets`? In the code above, should be last line :)", "Didn't seem to fix it: \r\n```\r\nOriginal features: dict_keys(['path', 'audio', 'sentence'])\r\nProcessed features: None\r\nProcessed features: None\r\n```", "Actually the culprit looks to be this one: \r\n`vectorized_datasets = raw_datasets.map(prepare_dataset).with_format(\"torch\")`\r\nWhen I remove this line: `vectorized_datasets = vectorized_datasets.remove_columns(['input_length', 'labels_length']+list(next(iter(raw_datasets.values())).features))`\r\n\r\nI still get \r\n```\r\nProcessed features: None\r\n```", "The culprit is definitely `.map` \r\nJust validated it. \r\nAny idea please? ", "> The culprit is definitely `.map` Just validated it. 
Any idea please?\r\n\r\nYes, indeed `.map` losses the features, because AFAIK pre-fetching the data to infer the features is expensive and not ideal, that's part of this issue https://github.com/huggingface/datasets/issues/3888\r\n\r\nAnyway, now you can pass the `features` as a param to `.map` as follows:\r\n\r\n```python\r\nfrom datasets import Features\r\nvectorized_datasets = raw_datasets.map(\r\n prepare_dataset,\r\n features=Features(\r\n {\"path\": raw_datasets[\"train\"].info.features[\"path\"], \"audio\": raw_datasets[\"train\"].info.features[\"audio\"], \"sentence\": raw_datasets[\"train\"].info.features[\"sentence\"]}\r\n ),\r\n).with_format(\"torch\")\r\n```\r\n\r\nAlso, to let you know, when calling `.remove_columns` over an `IterableDataset`, the `features` are not lost, as well as `.rename_column` and `rename_columns` :)\r\n\r\nMore information about the latter at https://github.com/huggingface/datasets/pull/5287", "@asennoussi alternatively you can just call `._resolve_features()` from your `IterableDataset` and it will pre-fetch the data to resolve the features, but note that feature-inference is not as accurate as if you manually specify which features and feature-types the `IterableDataset` has, as mentioned in the comment above, the alternative is to provide `features` param to `.map` :hugs:", "Got it thanks a lot! " ]
2022-11-23T10:54:59Z
2023-02-02T09:05:51Z
2022-11-28T12:53:24Z
CONTRIBUTOR
null
null
null
### Describe the bug The `remove_columns` method of the IterableDataset sets the dataset features to None. ### Steps to reproduce the bug ```python from datasets import Audio, load_dataset # load LS in streaming mode dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True) # check original features print("Original features: ", dataset.features.keys()) # define features to remove: we KEEP audio and text COLUMNS_TO_REMOVE = ['chapter_id', 'speaker_id', 'file', 'id'] dataset = dataset.remove_columns(COLUMNS_TO_REMOVE) # check processed features, uh-oh! print("Processed features: ", dataset.features) # streaming the first audio sample still works print("First sample:", next(iter(dataset))) ``` **Print Output:** ``` Original features: dict_keys(['file', 'audio', 'text', 'speaker_id', 'chapter_id', 'id']) Processed features: None First sample: {'audio': {'path': '2277-149896-0000.flac', 'array': array([ 0.00186157, 0.0005188 , 0.00024414, ..., -0.00097656, -0.00109863, -0.00146484]), 'sampling_rate': 16000}, 'text': "HE WAS IN A FEVERED STATE OF MIND OWING TO THE BLIGHT HIS WIFE'S ACTION THREATENED TO CAST UPON HIS ENTIRE FUTURE"} ``` ### Expected behavior The features should be those **not** removed by the `remove_columns` method, i.e. audio and text. (A sketch of the workarounds from the thread follows this record.) ### Environment info - `datasets` version: 2.7.1 - Platform: Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.15 - PyArrow version: 9.0.0 - Pandas version: 1.3.5 (Running on Google Colab for a blog post: https://colab.research.google.com/drive/1ySCQREPZEl4msLfxb79pYYOWjUZhkr9y#scrollTo=8pRDGiVmH2ml) cc @polinaeterna @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5284/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5284/timeline
null
completed
false
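The two workarounds from the thread above, condensed into one sketch. The repro dataset and column follow the issue; the `float64` dtype for the new column is an assumption, and `_resolve_features` is the private helper mentioned in the comments.

```python
from datasets import Features, Value, load_dataset

dataset = load_dataset("librispeech_asr", "clean", split="validation", streaming=True)

def add_length(batch):
    batch["input_length"] = len(batch["audio"]["array"]) / batch["audio"]["sampling_rate"]
    return batch

# Workaround 1: declare the output features so `.map` does not drop them.
features = Features({**dataset.features, "input_length": Value("float64")})
mapped = dataset.map(add_length, features=features)
print(mapped.features.keys())

# Workaround 2: pre-fetch one sample to re-infer the features (less precise).
resolved = dataset.map(add_length)._resolve_features()
print(resolved.features.keys())
```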
https://api.github.com/repos/huggingface/datasets/issues/3449
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3449/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3449/comments
https://api.github.com/repos/huggingface/datasets/issues/3449/events
https://github.com/huggingface/datasets/issues/3449
1,083,373,018
I_kwDODunzps5AkvXa
3,449
Add `__add__()`, `__iadd__()` and similar to `Dataset` class
{ "avatar_url": "https://avatars.githubusercontent.com/u/8904453?v=4", "events_url": "https://api.github.com/users/sgraaf/events{/privacy}", "followers_url": "https://api.github.com/users/sgraaf/followers", "following_url": "https://api.github.com/users/sgraaf/following{/other_user}", "gists_url": "https://api.github.com/users/sgraaf/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sgraaf", "id": 8904453, "login": "sgraaf", "node_id": "MDQ6VXNlcjg5MDQ0NTM=", "organizations_url": "https://api.github.com/users/sgraaf/orgs", "received_events_url": "https://api.github.com/users/sgraaf/received_events", "repos_url": "https://api.github.com/users/sgraaf/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sgraaf/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sgraaf/subscriptions", "type": "User", "url": "https://api.github.com/users/sgraaf" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": false, "description": "Generic discussion on the library", "id": 2067400324, "name": "generic discussion", "node_id": "MDU6TGFiZWwyMDY3NDAwMzI0", "url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion" } ]
closed
false
null
[]
null
[ "I was going through the codebase, and I believe the implementation of __add__() and __iadd__() will be similar to concatenate_datasets() after the elimination of code for arguments other than the list of datasets (info, split, axis). \r\n(Assuming elimination of axis means concatenating over axis 1.)", "Most data frame libraries (Polars, Pandas, ...) override `__add__` to perform (mathematical) summation, so having different behavior here is a bad idea." ]
2021-12-17T15:29:11Z
2023-07-25T15:33:57Z
2023-07-25T15:33:56Z
NONE
null
null
null
**Is your feature request related to a problem? Please describe.** No. **Describe the solution you'd like** I would like to be able to concatenate datasets as follows: ```python >>> dataset["train"] += dataset["validation"] ``` ... instead of using `concatenate_datasets()`: ```python >>> raw_datasets["train"] = concatenate_datasets([raw_datasets["train"], raw_datasets["validation"]]) >>> del raw_datasets["validation"] ``` **Describe alternatives you've considered** Well, I have considered `concatenate_datasets()` 😀 **Additional context** N.a.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3449/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3449/timeline
null
not_planned
false
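For reference, today's idiom for the requested `+=`, as a runnable sketch with toy data; the in-place operator itself is what the issue asks for and does not exist.

```python
from datasets import Dataset, DatasetDict, concatenate_datasets

dataset = DatasetDict(
    {
        "train": Dataset.from_dict({"x": [1, 2]}),
        "validation": Dataset.from_dict({"x": [3]}),
    }
)

# Equivalent of the requested `dataset["train"] += dataset["validation"]`:
dataset["train"] = concatenate_datasets([dataset["train"], dataset["validation"]])
del dataset["validation"]
print(len(dataset["train"]))  # 3
```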
https://api.github.com/repos/huggingface/datasets/issues/3191
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3191/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3191/comments
https://api.github.com/repos/huggingface/datasets/issues/3191/events
https://github.com/huggingface/datasets/issues/3191
1,041,225,111
I_kwDODunzps4-D9WX
3,191
Dataset viewer issue for '*compguesswhat*'
{ "avatar_url": "https://avatars.githubusercontent.com/u/2545336?v=4", "events_url": "https://api.github.com/users/benotti/events{/privacy}", "followers_url": "https://api.github.com/users/benotti/followers", "following_url": "https://api.github.com/users/benotti/following{/other_user}", "gists_url": "https://api.github.com/users/benotti/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/benotti", "id": 2545336, "login": "benotti", "node_id": "MDQ6VXNlcjI1NDUzMzY=", "organizations_url": "https://api.github.com/users/benotti/orgs", "received_events_url": "https://api.github.com/users/benotti/received_events", "repos_url": "https://api.github.com/users/benotti/repos", "site_admin": false, "starred_url": "https://api.github.com/users/benotti/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/benotti/subscriptions", "type": "User", "url": "https://api.github.com/users/benotti" }
[ { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "```python\r\n>>> import datasets\r\n>>> dataset = datasets.load_dataset('compguesswhat', name='compguesswhat-original',split='train', streaming=True)\r\n>>> next(iter(dataset))\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 497, in __iter__\r\n for key, example in self._iter():\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 494, in _iter\r\n yield from ex_iterable\r\n File \"/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/iterable_dataset.py\", line 87, in __iter__\r\n yield from self.generate_examples_fn(**self.kwargs)\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/compguesswhat/4d08b9e0a8d1cf036c9626c93be4a759fdd9fcce050ea503ea14b075e830c799/compguesswhat.py\", line 251, in _generate_examples\r\n with gzip.open(filepath) as in_file:\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/gzip.py\", line 58, in open\r\n binary_file = GzipFile(filename, gz_mode, compresslevel)\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/gzip.py\", line 173, in __init__\r\n fileobj = self.myfileobj = builtins.open(filename, mode or 'rb')\r\nFileNotFoundError: [Errno 2] No such file or directory: 'zip://compguesswhat-original/0.2.0/compguesswhat.train.jsonl.gz::https://www.dropbox.com/s/l0nc13udml6vs0w/compguesswhat-original.zip?dl=1'\r\n```\r\n\r\nIt's an issue with the streaming mode. Note that normal mode is used by the dataset viewer when streaming is failing, but only for the smallest datasets. This dataset is above the limit, hence the error.\r\n\r\nSame case as https://github.com/huggingface/datasets/issues/3186#issuecomment-1096549774.", "cc @huggingface/datasets ", "There is an issue with the URLs of their data files: https://www.dropbox.com/s/l0nc13udml6vs0w/compguesswhat-original.zip?dl=1\r\n> Dropbox Error: That didn't work for some reason\r\n\r\nError reported to their repo:\r\n- https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1", "Closed by:\r\n- #4968" ]
2021-11-01T14:16:49Z
2022-09-12T08:02:29Z
2022-09-12T08:02:29Z
NONE
null
null
null
## Dataset viewer issue for '*compguesswhat*' **Link:** https://huggingface.co/datasets/compguesswhat File not found Am I the one who added this dataset? No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3191/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3191/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/1085
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1085/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1085/comments
https://api.github.com/repos/huggingface/datasets/issues/1085/events
https://github.com/huggingface/datasets/pull/1085
756,704,563
MDExOlB1bGxSZXF1ZXN0NTMyMjExNTA4
1,085
add mutual friends conversational dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VictorSanh", "id": 16107619, "login": "VictorSanh", "node_id": "MDQ6VXNlcjE2MTA3NjE5", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "repos_url": "https://api.github.com/users/VictorSanh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "type": "User", "url": "https://api.github.com/users/VictorSanh" }
[]
closed
false
null
[]
null
[ "Ready for review" ]
2020-12-04T00:48:21Z
2020-12-16T15:58:31Z
2020-12-16T15:58:30Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1085.diff", "html_url": "https://github.com/huggingface/datasets/pull/1085", "merged_at": "2020-12-16T15:58:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/1085.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1085" }
Mutual friends dataset. WIP. TODO: - scenario_kbs (bug with pyarrow conversion) - download from codalab (checksums bug)
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1085/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1085/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/2859
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2859/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2859/comments
https://api.github.com/repos/huggingface/datasets/issues/2859/events
https://github.com/huggingface/datasets/issues/2859
984,324,500
MDU6SXNzdWU5ODQzMjQ1MDA=
2,859
Loading allenai/c4 in streaming mode does too many HEAD requests
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "https://github.com/huggingface/datasets/blob/6c766f9115d686182d76b1b937cb27e099c45d68/src/datasets/builder.py#L179-L186", "Thanks a lot!!!" ]
2021-08-31T21:11:04Z
2021-10-12T07:35:52Z
2021-10-11T11:05:51Z
MEMBER
null
null
null
This does 60,000+ HEAD requests to get all the ETags of all the data files: ```python from datasets import load_dataset load_dataset("allenai/c4", streaming=True) ``` It makes loading the dataset completely impractical. The ETags are used to compute the config id (it must depend on the data files being used). Instead of using the ETags, we could simply use the commit hash of the dataset repository on the Hub, as well as the glob pattern used to resolve the files (here it's `*` by default, to load all the files of the repository). A sketch of this idea follows this record.
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2859/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2859/timeline
null
completed
false
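A hedged sketch of the proposal above: derive the config id from the repository's commit sha plus the glob pattern, replacing per-file HEAD requests with a single metadata call. The `huggingface_hub` call is taken from its public API; this is not the library's actual implementation.

```python
import hashlib
from huggingface_hub import HfApi

def config_id_for(repo_id: str, pattern: str = "*") -> str:
    # The commit sha already changes whenever any data file changes,
    # so it can stand in for the per-file ETags.
    sha = HfApi().dataset_info(repo_id).sha
    return hashlib.sha256(f"{repo_id}@{sha}::{pattern}".encode()).hexdigest()[:16]

print(config_id_for("allenai/c4"))
```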
https://api.github.com/repos/huggingface/datasets/issues/1067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1067/comments
https://api.github.com/repos/huggingface/datasets/issues/1067/events
https://github.com/huggingface/datasets/pull/1067
756,414,212
MDExOlB1bGxSZXF1ZXN0NTMxOTYyNDYx
1,067
add xquad-r dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/6687858?v=4", "events_url": "https://api.github.com/users/manandey/events{/privacy}", "followers_url": "https://api.github.com/users/manandey/followers", "following_url": "https://api.github.com/users/manandey/following{/other_user}", "gists_url": "https://api.github.com/users/manandey/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/manandey", "id": 6687858, "login": "manandey", "node_id": "MDQ6VXNlcjY2ODc4NTg=", "organizations_url": "https://api.github.com/users/manandey/orgs", "received_events_url": "https://api.github.com/users/manandey/received_events", "repos_url": "https://api.github.com/users/manandey/repos", "site_admin": false, "starred_url": "https://api.github.com/users/manandey/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/manandey/subscriptions", "type": "User", "url": "https://api.github.com/users/manandey" }
[]
closed
false
null
[]
null
[]
2020-12-03T17:50:01Z
2020-12-03T17:53:21Z
2020-12-03T17:53:15Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/1067.diff", "html_url": "https://github.com/huggingface/datasets/pull/1067", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/1067.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/1067" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1067/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1067/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3230
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3230/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3230/comments
https://api.github.com/repos/huggingface/datasets/issues/3230/events
https://github.com/huggingface/datasets/pull/3230
1,047,135,583
PR_kwDODunzps4uNfEd
3,230
Add full tagset to conll2003 README
{ "avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4", "events_url": "https://api.github.com/users/BramVanroy/events{/privacy}", "followers_url": "https://api.github.com/users/BramVanroy/followers", "following_url": "https://api.github.com/users/BramVanroy/following{/other_user}", "gists_url": "https://api.github.com/users/BramVanroy/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/BramVanroy", "id": 2779410, "login": "BramVanroy", "node_id": "MDQ6VXNlcjI3Nzk0MTA=", "organizations_url": "https://api.github.com/users/BramVanroy/orgs", "received_events_url": "https://api.github.com/users/BramVanroy/received_events", "repos_url": "https://api.github.com/users/BramVanroy/repos", "site_admin": false, "starred_url": "https://api.github.com/users/BramVanroy/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/BramVanroy/subscriptions", "type": "User", "url": "https://api.github.com/users/BramVanroy" }
[]
closed
false
null
[]
null
[ "I also added the missing `pretty_name` tag in the dataset card to fix the CI" ]
2021-11-08T08:06:04Z
2021-11-09T10:48:38Z
2021-11-09T10:40:58Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3230.diff", "html_url": "https://github.com/huggingface/datasets/pull/3230", "merged_at": "2021-11-09T10:40:58Z", "patch_url": "https://github.com/huggingface/datasets/pull/3230.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3230" }
Even though it is possible to manually get the tagset list with ```python dset.features[field_name].feature.names ``` I think it is useful to have an overview of the used tagset on the dataset card. This is particularly useful in light of the **dataset viewer**: the tags are encoded, so it is not immediately obvious what they are for a given sample. Adding a label-int mapping should make it easier for visitors to get a grasp of what they mean. From a user-experience perspective, I would urge the full tagsets to always be available in the READMEs, but I understand that that would probably take a lot of work. Perhaps it can be automated? closes #3189
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/3230/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3230/timeline
null
null
true
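The label-int mapping described in the PR body above can be recovered in a few lines; a minimal sketch, assuming the standard conll2003 `ner_tags` column (any ClassLabel-encoded column works the same way):

```python
from datasets import load_dataset

# Minimal sketch of building the label-int mapping that this PR adds to
# the dataset card, using the API mentioned in the PR body.
ds = load_dataset("conll2003", split="train")
names = ds.features["ner_tags"].feature.names  # tag names in label order
id2label = dict(enumerate(names))
print(id2label)  # {0: 'O', 1: 'B-PER', 2: 'I-PER', ...}
```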
https://api.github.com/repos/huggingface/datasets/issues/2974
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2974/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2974/comments
https://api.github.com/repos/huggingface/datasets/issues/2974/events
https://github.com/huggingface/datasets/pull/2974
1,008,247,787
PR_kwDODunzps4sUZCX
2,974
Actually disable dummy labels by default
{ "avatar_url": "https://avatars.githubusercontent.com/u/12866554?v=4", "events_url": "https://api.github.com/users/Rocketknight1/events{/privacy}", "followers_url": "https://api.github.com/users/Rocketknight1/followers", "following_url": "https://api.github.com/users/Rocketknight1/following{/other_user}", "gists_url": "https://api.github.com/users/Rocketknight1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Rocketknight1", "id": 12866554, "login": "Rocketknight1", "node_id": "MDQ6VXNlcjEyODY2NTU0", "organizations_url": "https://api.github.com/users/Rocketknight1/orgs", "received_events_url": "https://api.github.com/users/Rocketknight1/received_events", "repos_url": "https://api.github.com/users/Rocketknight1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Rocketknight1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Rocketknight1/subscriptions", "type": "User", "url": "https://api.github.com/users/Rocketknight1" }
[]
closed
false
null
[]
null
[]
2021-09-27T14:50:20Z
2021-09-29T09:04:42Z
2021-09-29T09:04:41Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2974.diff", "html_url": "https://github.com/huggingface/datasets/pull/2974", "merged_at": "2021-09-29T09:04:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/2974.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2974" }
So I might have just changed the docstring instead of the actual default argument value and not realized. @lhoestq I'm sorry >.>
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2974/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2974/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4711
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4711/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4711/comments
https://api.github.com/repos/huggingface/datasets/issues/4711/events
https://github.com/huggingface/datasets/issues/4711
1,309,138,570
I_kwDODunzps5OB96K
4,711
Document how to create a dataset loading script for audio/vision
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[ "I'm closing this issue as both the Audio and Image sections now have a \"Create dataset\" page that contains the info about writing the loading script version of a dataset." ]
2022-07-19T08:03:40Z
2023-07-25T16:07:52Z
2023-07-25T16:07:52Z
MEMBER
null
null
null
Currently, in our docs for Audio/Vision/Text, we explain how to: - Load data - Process data However we only explain how to *Create a dataset loading script* for text data. I think it would be useful that we add the same for Audio/Vision as these have some specificities different from Text. See, for example: - #4697 - and comment there: https://github.com/huggingface/datasets/issues/4697#issuecomment-1191502492 CC: @stevhliu
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/4711/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4711/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/5266
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5266/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5266/comments
https://api.github.com/repos/huggingface/datasets/issues/5266/events
https://github.com/huggingface/datasets/pull/5266
1,455,281,310
PR_kwDODunzps5DN9BT
5,266
Specify arguments as keywords in librosa.resample to avoid future errors
{ "avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4", "events_url": "https://api.github.com/users/polinaeterna/events{/privacy}", "followers_url": "https://api.github.com/users/polinaeterna/followers", "following_url": "https://api.github.com/users/polinaeterna/following{/other_user}", "gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/polinaeterna", "id": 16348744, "login": "polinaeterna", "node_id": "MDQ6VXNlcjE2MzQ4NzQ0", "organizations_url": "https://api.github.com/users/polinaeterna/orgs", "received_events_url": "https://api.github.com/users/polinaeterna/received_events", "repos_url": "https://api.github.com/users/polinaeterna/repos", "site_admin": false, "starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions", "type": "User", "url": "https://api.github.com/users/polinaeterna" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-11-18T14:58:47Z
2022-11-21T15:45:02Z
2022-11-21T15:41:57Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5266.diff", "html_url": "https://github.com/huggingface/datasets/pull/5266", "merged_at": "2022-11-21T15:41:57Z", "patch_url": "https://github.com/huggingface/datasets/pull/5266.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5266" }
Fixes a warning and a future deprecation from `librosa.resample`: ``` FutureWarning: Pass orig_sr=16000, target_sr=48000 as keyword args. From version 0.10 passing these as positional arguments will result in an error array = librosa.resample(array, sampling_rate, self.sampling_rate, res_type="kaiser_best") ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5266/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5266/timeline
null
null
true
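The change amounts to switching from positional to keyword arguments in the `librosa.resample` call; a standalone sketch (the input array here is illustrative):

```python
import numpy as np
import librosa

# Illustrative input: one second of silence sampled at 16 kHz.
array = np.zeros(16_000, dtype=np.float32)

# Keyword form used after this PR; passing the rates positionally is what
# triggers the FutureWarning quoted above and errors from librosa 0.10 on.
resampled = librosa.resample(
    array, orig_sr=16_000, target_sr=48_000, res_type="kaiser_best"
)
print(resampled.shape)  # (48000,)
```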
https://api.github.com/repos/huggingface/datasets/issues/671
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/671/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/671/comments
https://api.github.com/repos/huggingface/datasets/issues/671/events
https://github.com/huggingface/datasets/issues/671
709,093,151
MDU6SXNzdWU3MDkwOTMxNTE=
671
[BUG] No such file or directory
{ "avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4", "events_url": "https://api.github.com/users/jbragg/events{/privacy}", "followers_url": "https://api.github.com/users/jbragg/followers", "following_url": "https://api.github.com/users/jbragg/following{/other_user}", "gists_url": "https://api.github.com/users/jbragg/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jbragg", "id": 2238344, "login": "jbragg", "node_id": "MDQ6VXNlcjIyMzgzNDQ=", "organizations_url": "https://api.github.com/users/jbragg/orgs", "received_events_url": "https://api.github.com/users/jbragg/received_events", "repos_url": "https://api.github.com/users/jbragg/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jbragg/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jbragg/subscriptions", "type": "User", "url": "https://api.github.com/users/jbragg" }
[]
closed
false
null
[]
null
[]
2020-09-25T16:38:54Z
2020-09-28T14:42:42Z
2020-09-28T14:42:42Z
CONTRIBUTOR
null
null
null
This happens when both 1. Huggingface datasets cache dir does not exist 2. Try to load a local dataset script builder.py throws an error when trying to create a filelock in a directory (cache/datasets) that does not exist https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L177 Tested on v1.0.2 @lhoestq
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/671/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/671/timeline
null
completed
false
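A minimal reproduction/workaround sketch for the failure mode described above, assuming the third-party `filelock` package that `datasets` uses (the paths are illustrative):

```python
import os
from filelock import FileLock

# FileLock fails with "No such file or directory" if the parent directory
# of the lock file is missing, so the cache directory must exist first.
cache_dir = os.path.expanduser("~/.cache/huggingface/datasets")
os.makedirs(cache_dir, exist_ok=True)  # the workaround / eventual fix

with FileLock(os.path.join(cache_dir, "builder.lock")):
    pass  # dataset preparation would happen here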
https://api.github.com/repos/huggingface/datasets/issues/5585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5585/comments
https://api.github.com/repos/huggingface/datasets/issues/5585/events
https://github.com/huggingface/datasets/issues/5585
1,602,190,030
I_kwDODunzps5ff3rO
5,585
Cache is not transportable
{ "avatar_url": "https://avatars.githubusercontent.com/u/4443482?v=4", "events_url": "https://api.github.com/users/davidgilbertson/events{/privacy}", "followers_url": "https://api.github.com/users/davidgilbertson/followers", "following_url": "https://api.github.com/users/davidgilbertson/following{/other_user}", "gists_url": "https://api.github.com/users/davidgilbertson/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/davidgilbertson", "id": 4443482, "login": "davidgilbertson", "node_id": "MDQ6VXNlcjQ0NDM0ODI=", "organizations_url": "https://api.github.com/users/davidgilbertson/orgs", "received_events_url": "https://api.github.com/users/davidgilbertson/received_events", "repos_url": "https://api.github.com/users/davidgilbertson/repos", "site_admin": false, "starred_url": "https://api.github.com/users/davidgilbertson/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/davidgilbertson/subscriptions", "type": "User", "url": "https://api.github.com/users/davidgilbertson" }
[]
closed
false
null
[]
null
[ "Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.\r\n\r\nIn particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because some hashes used by caching are based on pickle dumps of the function you pass to `map`.\r\n\r\nFinally you may copy the cache to another machine, but all the `cached-*.arrow` files are unlikely to be reloaded.", "OK good to know. Thanks @lhoestq !" ]
2023-02-28T00:53:06Z
2023-02-28T21:26:52Z
2023-02-28T21:26:52Z
NONE
null
null
null
### Describe the bug I would like to share the cache between two machines (a Windows host machine and a WSL instance). I run most of my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move the cache to the host Windows machine, thereby sharing the downloads. I'm hoping that I can just copy/paste the cache files, but I notice that a lot of the file names start with the path name, e.g. `_home_davidg_.cache_huggingface_datasets_conll2003_default-451...98.lock` where `home/davidg` is where the cache is in WSL. This seems to suggest that the cache is not portable/cannot be centralised or shared. Is this the case, or are the files that start with path names not integral to the caching mechanism? Copying the cache files _seems_ to work, but I'm not filled with confidence that something isn't going to break. A related issue: when trying to load a dataset that should come from cache (running in WSL, pointing to cache on the Windows host) it seemed to work fine, but it still uses a WSL directory for `.cache\huggingface\modules\datasets_modules`. I see nothing in the docs about this, or how to point it to a different place. I have asked a related question on the forum: https://discuss.huggingface.co/t/is-datasets-cache-operating-system-agnostic/32656 ### Steps to reproduce the bug View the cache directory in WSL/Windows. ### Expected behavior Cache can be shared between (virtual) machines and be transportable. It would be nice to have a simple way to say "Dear Hugging Face packages, please put ALL your cache in `blah/de/blah`" and have all the Hugging Face packages respect that single location. ### Environment info ``` - `datasets` version: 2.9.0 - Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31 - Python version: 3.10.8 - PyArrow version: 11.0.0 - Pandas version: 1.5.3 - ```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5585/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5585/timeline
null
completed
false
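For the "put ALL your cache in one place" request, the closest existing mechanism is environment variables; a sketch with an illustrative shared path (note the caveat in the comments above that `cached-*.arrow` files produced by `map` may still not reload across environments):

```python
import os

# Illustrative shared location; set these before importing the libraries.
os.environ["HF_HOME"] = "/mnt/shared/huggingface"  # umbrella cache dir
os.environ["HF_DATASETS_CACHE"] = "/mnt/shared/huggingface/datasets"

import datasets

print(datasets.config.HF_DATASETS_CACHE)  # confirms the override took effect
```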
https://api.github.com/repos/huggingface/datasets/issues/333
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/333/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/333/comments
https://api.github.com/repos/huggingface/datasets/issues/333/events
https://github.com/huggingface/datasets/pull/333
649,236,516
MDExOlB1bGxSZXF1ZXN0NDQyOTE1NDQ0
333
fix variable name typo
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00" }
[]
closed
false
null
[]
null
[ "Good catch :)\r\nI think there is another occurence that needs to be fixed in the second gist (line 4924 of the notebook file):\r\n```python\r\nbleu = nlp.load_metric(...)\r\n```", "Was fixed in e16f79b5f7fc12a6a30c777722be46897a272e6f\r\nClosing it." ]
2020-07-01T19:13:50Z
2020-07-24T15:43:31Z
2020-07-24T08:32:16Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/333.diff", "html_url": "https://github.com/huggingface/datasets/pull/333", "merged_at": null, "patch_url": "https://github.com/huggingface/datasets/pull/333.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/333" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/333/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/333/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4775
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4775/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4775/comments
https://api.github.com/repos/huggingface/datasets/issues/4775/events
https://github.com/huggingface/datasets/issues/4775
1,324,136,486
I_kwDODunzps5O7Lgm
4,775
Streaming not supported in Theivaprakasham/wildreceipt
{ "avatar_url": "https://avatars.githubusercontent.com/u/100361173?v=4", "events_url": "https://api.github.com/users/NitishkKarra/events{/privacy}", "followers_url": "https://api.github.com/users/NitishkKarra/followers", "following_url": "https://api.github.com/users/NitishkKarra/following{/other_user}", "gists_url": "https://api.github.com/users/NitishkKarra/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NitishkKarra", "id": 100361173, "login": "NitishkKarra", "node_id": "U_kgDOBftj1Q", "organizations_url": "https://api.github.com/users/NitishkKarra/orgs", "received_events_url": "https://api.github.com/users/NitishkKarra/received_events", "repos_url": "https://api.github.com/users/NitishkKarra/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NitishkKarra/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NitishkKarra/subscriptions", "type": "User", "url": "https://api.github.com/users/NitishkKarra" }
[ { "color": "fef2c0", "default": false, "description": "", "id": 3287858981, "name": "streaming", "node_id": "MDU6TGFiZWwzMjg3ODU4OTgx", "url": "https://api.github.com/repos/huggingface/datasets/labels/streaming" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "Thanks for reporting @NitishkKarra.\r\n\r\nThe root source of the issue is that streaming mode is not supported out-of-the-box for that dataset, because it contains a TAR file.\r\n\r\nWe have opened a discussion in the corresponding Hub dataset page, pointing out this issue: https://huggingface.co/datasets/Theivaprakasham/wildreceipt/discussions/1\r\n\r\nI'm closing this issue here, so this discussion is transferred there instead." ]
2022-08-01T09:46:17Z
2022-08-01T10:30:29Z
2022-08-01T10:30:29Z
NONE
null
null
null
### Link _No response_ ### Description _No response_ ### Owner _No response_
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4775/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4775/timeline
null
completed
false
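The `dl_manager.iter_archive` pattern mentioned in the resolution comment looks roughly like this inside a loading script; a sketch with a placeholder URL and a placeholder "text" field, not the actual wildreceipt layout:

```python
import datasets

# Sketch of a streaming-friendly loading script for a TAR-based dataset.
class MyTarDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        archive = dl_manager.download("https://example.com/data.tar.gz")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # iter_archive reads the TAR sequentially: it works in
                # streaming mode and avoids slow random access.
                gen_kwargs={"files": dl_manager.iter_archive(archive)},
            )
        ]

    def _generate_examples(self, files):
        # iter_archive yields (path-inside-archive, file-like object) pairs.
        for idx, (path, f) in enumerate(files):
            if path.endswith(".txt"):
                yield idx, {"text": f.read().decode("utf-8")}
```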
https://api.github.com/repos/huggingface/datasets/issues/4093
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4093/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4093/comments
https://api.github.com/repos/huggingface/datasets/issues/4093/events
https://github.com/huggingface/datasets/issues/4093
1,192,523,161
I_kwDODunzps5HFHWZ
4,093
elena-soare/crawled-ecommerce: missing dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17519354?v=4", "events_url": "https://api.github.com/users/seevaratnam/events{/privacy}", "followers_url": "https://api.github.com/users/seevaratnam/followers", "following_url": "https://api.github.com/users/seevaratnam/following{/other_user}", "gists_url": "https://api.github.com/users/seevaratnam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/seevaratnam", "id": 17519354, "login": "seevaratnam", "node_id": "MDQ6VXNlcjE3NTE5MzU0", "organizations_url": "https://api.github.com/users/seevaratnam/orgs", "received_events_url": "https://api.github.com/users/seevaratnam/received_events", "repos_url": "https://api.github.com/users/seevaratnam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/seevaratnam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/seevaratnam/subscriptions", "type": "User", "url": "https://api.github.com/users/seevaratnam" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4", "events_url": "https://api.github.com/users/severo/events{/privacy}", "followers_url": "https://api.github.com/users/severo/followers", "following_url": "https://api.github.com/users/severo/following{/other_user}", "gists_url": "https://api.github.com/users/severo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/severo", "id": 1676121, "login": "severo", "node_id": "MDQ6VXNlcjE2NzYxMjE=", "organizations_url": "https://api.github.com/users/severo/orgs", "received_events_url": "https://api.github.com/users/severo/received_events", "repos_url": "https://api.github.com/users/severo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/severo/subscriptions", "type": "User", "url": "https://api.github.com/users/severo" } ]
null
[ "It's a bug! Thanks for reporting, I'm looking at it.", "By the way, the error on our part is due to the huge size of every row (~90MB). The dataset viewer does not support such big dataset rows for the moment.\r\nAnyway, we're working to give a hint about this in the dataset viewer.", "Fixed. See https://huggingface.co/datasets/elena-soare/crawled-ecommerce/viewer/elena-soare--crawled-ecommerce/train.\r\n\r\n<img width=\"1552\" alt=\"Capture d’écran 2022-04-12 à 11 23 51\" src=\"https://user-images.githubusercontent.com/1676121/162929722-2e2b80e2-154a-4b61-87bd-e341bd6c46e6.png\">\r\n\r\nThanks for reporting!" ]
2022-04-05T02:25:19Z
2022-04-12T09:34:53Z
2022-04-12T09:34:53Z
NONE
null
null
null
elena-soare/crawled-ecommerce **Link:** *link to the dataset viewer page* *short description of the issue* Am I the one who added this dataset ? Yes-No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4093/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4093/timeline
null
completed
false
https://api.github.com/repos/huggingface/datasets/issues/4419
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4419/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4419/comments
https://api.github.com/repos/huggingface/datasets/issues/4419/events
https://github.com/huggingface/datasets/issues/4419
1,252,652,896
I_kwDODunzps5Kqfdg
4,419
Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual`
{ "avatar_url": "https://avatars.githubusercontent.com/u/36760800?v=4", "events_url": "https://api.github.com/users/alvarobartt/events{/privacy}", "followers_url": "https://api.github.com/users/alvarobartt/followers", "following_url": "https://api.github.com/users/alvarobartt/following{/other_user}", "gists_url": "https://api.github.com/users/alvarobartt/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/alvarobartt", "id": 36760800, "login": "alvarobartt", "node_id": "MDQ6VXNlcjM2NzYwODAw", "organizations_url": "https://api.github.com/users/alvarobartt/orgs", "received_events_url": "https://api.github.com/users/alvarobartt/received_events", "repos_url": "https://api.github.com/users/alvarobartt/repos", "site_admin": false, "starred_url": "https://api.github.com/users/alvarobartt/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/alvarobartt/subscriptions", "type": "User", "url": "https://api.github.com/users/alvarobartt" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi! If the only goal is to improve readability, it's better to use `assertTupleEqual` than `assertSequenceEqual` for Python tuples. Also, note that this function is called internally by `assertEqual`, but I guess we can accept a PR to be more verbose.", "Hi @mariosasko, right! I'll update the issue title/desc with `assertTupleEqual` even though as you said it seems to be internally using `assertEqual` so I'm not sure whether it's worth it or not...\r\n\r\nhttps://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual", "I thought we were supposed to move gradually from `unittest` to `pytest`..." ]
2022-05-30T12:13:18Z
2022-09-30T16:01:37Z
2022-09-30T16:01:37Z
CONTRIBUTOR
null
null
null
**Is your feature request related to a problem? Please describe.** So this is more of a readability improvement than a proposal: wouldn't it be better to use `assertTupleEqual` over tuples rather than `assertEqual`? `unittest` added that function in v3.1, as detailed at https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual, so maybe it's worth updating. Find an example of an `assertEqual` over a tuple in 🤗 `datasets` unit tests over an `ArrowDataset` at https://github.com/huggingface/datasets/blob/0bb47271910c8a0b628dba157988372307fca1d2/tests/test_arrow_dataset.py#L570 **Describe the solution you'd like** Start slowly replacing all the `assertEqual` statements with `assertTupleEqual` if the assertion is done over a Python tuple, as we're doing with Python lists using `assertListEqual` rather than `assertEqual`. **Additional context** If so, please let me know and I'll try to go over the tests and create a PR if applicable; otherwise, if you consider this should stay as `assertEqual` rather than `assertTupleEqual`, feel free to close this issue! Thanks 🤗
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4419/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4419/timeline
null
completed
false
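A small illustration of the proposal, and of the point made in the comments that `assertEqual` already dispatches to the type-specific method internally for tuples:

```python
import unittest

class ShapeTest(unittest.TestCase):
    def test_shape(self):
        shape = (30, 4)
        # Both pass; assertEqual delegates to assertTupleEqual for tuples,
        # so the proposed change is purely about explicitness/readability.
        self.assertEqual(shape, (30, 4))
        self.assertTupleEqual(shape, (30, 4))

if __name__ == "__main__":
    unittest.main()
```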
https://api.github.com/repos/huggingface/datasets/issues/2229
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2229/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2229/comments
https://api.github.com/repos/huggingface/datasets/issues/2229/events
https://github.com/huggingface/datasets/issues/2229
859,810,602
MDU6SXNzdWU4NTk4MTA2MDI=
2,229
`xnli` dataset creating a tuple key while yielding instead of `str` or `int`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42388668?v=4", "events_url": "https://api.github.com/users/NikhilBartwal/events{/privacy}", "followers_url": "https://api.github.com/users/NikhilBartwal/followers", "following_url": "https://api.github.com/users/NikhilBartwal/following{/other_user}", "gists_url": "https://api.github.com/users/NikhilBartwal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/NikhilBartwal", "id": 42388668, "login": "NikhilBartwal", "node_id": "MDQ6VXNlcjQyMzg4NjY4", "organizations_url": "https://api.github.com/users/NikhilBartwal/orgs", "received_events_url": "https://api.github.com/users/NikhilBartwal/received_events", "repos_url": "https://api.github.com/users/NikhilBartwal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/NikhilBartwal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/NikhilBartwal/subscriptions", "type": "User", "url": "https://api.github.com/users/NikhilBartwal" }
[]
closed
false
null
[]
null
[ "Hi ! Sure sounds good. Also if you find other datasets that use tuples instead of str/int, you can also fix them !\r\nthanks :)", "@lhoestq I have sent a PR for fixing the issue. Would be great if you could have a look! Thanks!" ]
2021-04-16T13:21:53Z
2021-04-19T08:56:42Z
2021-04-19T08:56:42Z
CONTRIBUTOR
null
null
null
When using `ds = datasets.load_dataset('xnli', 'ar')`, the dataset generation script uses the following section of code when yielding examples, which yields a tuple key instead of the specified `str` or `int` key: https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196 Since community datasets in Tensorflow Datasets also use HF datasets, this causes a Tuple key error while loading HF's `xnli` dataset. I'm up for sending a fix for this; I think we can simply use `file_idx + "_" + row_idx` as a unique key instead of a tuple.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2229/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2229/timeline
null
completed
false
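The proposed fix is a one-line change in spirit; a sketch of the pattern (file handling and field names are illustrative, not the exact xnli script):

```python
def _generate_examples(filepaths):
    # Compose a unique string key from the file and row indices instead of
    # yielding the (file_idx, row_idx) tuple that broke TFDS interop.
    for file_idx, filepath in enumerate(filepaths):
        with open(filepath, encoding="utf-8") as f:
            for row_idx, line in enumerate(f):
                yield f"{file_idx}_{row_idx}", {"text": line.rstrip("\n")}
```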
https://api.github.com/repos/huggingface/datasets/issues/4562
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4562/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4562/comments
https://api.github.com/repos/huggingface/datasets/issues/4562/events
https://github.com/huggingface/datasets/issues/4562
1,283,779,557
I_kwDODunzps5MhOvl
4,562
Dataset Viewer issue for allocine
{ "avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4", "events_url": "https://api.github.com/users/lewtun/events{/privacy}", "followers_url": "https://api.github.com/users/lewtun/followers", "following_url": "https://api.github.com/users/lewtun/following{/other_user}", "gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lewtun", "id": 26859204, "login": "lewtun", "node_id": "MDQ6VXNlcjI2ODU5MjA0", "organizations_url": "https://api.github.com/users/lewtun/orgs", "received_events_url": "https://api.github.com/users/lewtun/received_events", "repos_url": "https://api.github.com/users/lewtun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lewtun/subscriptions", "type": "User", "url": "https://api.github.com/users/lewtun" }
[ { "color": "E5583E", "default": false, "description": "Related to the dataset viewer on huggingface.co", "id": 3470211881, "name": "dataset-viewer", "node_id": "LA_kwDODunzps7O1zsp", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" } ]
null
[ "I removed my assignment as @huggingface/datasets should be able to answer better than me\r\n", "Let me have a look...", "Thanks for the quick fix @albertvillanova ", "Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content *sequentially* (no random access).", "> Note that the underlying issue is that datasets containing TAR files are not streamable out of the box: they need being iterated with `dl_manager.iter_archive` to avoid performance issues because they access their file content _sequentially_ (no random access).\r\n\r\nAh thanks for the clarification! I'll look out for this next time and implement the fix myself :)" ]
2022-06-24T13:50:38Z
2022-06-27T06:39:32Z
2022-06-24T16:44:41Z
MEMBER
null
null
null
### Link https://huggingface.co/datasets/allocine ### Description Not sure if this is a problem with `bz2` compression, but I thought these datasets could be streamed: ``` Status code: 400 Exception: AttributeError Message: 'TarContainedFile' object has no attribute 'readable' ``` ### Owner No
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4562/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4562/timeline
null
completed
false
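The report boils down to a streaming smoke test like the one below, which raised the quoted `AttributeError` before the fix; a sketch (library versions may affect the exact arguments needed):

```python
from datasets import load_dataset

# Streaming smoke test: before the fix this raised
# AttributeError: 'TarContainedFile' object has no attribute 'readable'.
ds = load_dataset("allocine", split="train", streaming=True)
print(next(iter(ds)))
```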
https://api.github.com/repos/huggingface/datasets/issues/4246
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4246/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4246/comments
https://api.github.com/repos/huggingface/datasets/issues/4246/events
https://github.com/huggingface/datasets/pull/4246
1,218,320,293
PR_kwDODunzps427NiD
4,246
Support to load dataset with TSV files by passing only dataset name
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-04-28T07:30:15Z
2022-05-06T08:38:28Z
2022-05-06T08:14:07Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4246.diff", "html_url": "https://github.com/huggingface/datasets/pull/4246", "merged_at": "2022-05-06T08:14:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/4246.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4246" }
This PR implements support to load a dataset (w/o script) containing TSV files by passing only the dataset name (no need to pass `sep='\t'`): ```python ds = load_dataset("dataset/name") ``` The refactoring allows for future builder kwargs customizations based on file extension. Related to #4238.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4246/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4246/timeline
null
null
true
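Concretely, the new behaviour versus the previous explicit form; the repo id below is a placeholder:

```python
from datasets import load_dataset

# After this PR: a dataset repo of .tsv files infers the tab separator
# from the file extension ("username/my-tsv-dataset" is a placeholder).
ds = load_dataset("username/my-tsv-dataset")

# Before, the separator had to be passed through the csv builder by hand:
ds = load_dataset("csv", data_files={"train": "train.tsv"}, sep="\t")
```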
https://api.github.com/repos/huggingface/datasets/issues/3652
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3652/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3652/comments
https://api.github.com/repos/huggingface/datasets/issues/3652/events
https://github.com/huggingface/datasets/pull/3652
1,118,808,738
PR_kwDODunzps4xzinr
3,652
sp. Columbia => Colombia
{ "avatar_url": "https://avatars.githubusercontent.com/u/3781280?v=4", "events_url": "https://api.github.com/users/serapio/events{/privacy}", "followers_url": "https://api.github.com/users/serapio/followers", "following_url": "https://api.github.com/users/serapio/following{/other_user}", "gists_url": "https://api.github.com/users/serapio/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/serapio", "id": 3781280, "login": "serapio", "node_id": "MDQ6VXNlcjM3ODEyODA=", "organizations_url": "https://api.github.com/users/serapio/orgs", "received_events_url": "https://api.github.com/users/serapio/received_events", "repos_url": "https://api.github.com/users/serapio/repos", "site_admin": false, "starred_url": "https://api.github.com/users/serapio/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/serapio/subscriptions", "type": "User", "url": "https://api.github.com/users/serapio" }
[]
closed
false
null
[]
null
[ "The original openslr site mixed both names https://openslr.org/72/ :-)", "Yeah, I filed the issue to have it fixed there last year, but it looks like they missed a few." ]
2022-01-31T00:41:03Z
2022-02-09T16:55:25Z
2022-01-31T08:29:07Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/3652.diff", "html_url": "https://github.com/huggingface/datasets/pull/3652", "merged_at": "2022-01-31T08:29:07Z", "patch_url": "https://github.com/huggingface/datasets/pull/3652.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/3652" }
"Columbia" is various places in North America. The country is "Colombia".
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3652/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3652/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/4050
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4050/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4050/comments
https://api.github.com/repos/huggingface/datasets/issues/4050/events
https://github.com/huggingface/datasets/pull/4050
1,184,346,501
PR_kwDODunzps41NAMF
4,050
Add RVL-CDIP dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17746528?v=4", "events_url": "https://api.github.com/users/dnaveenr/events{/privacy}", "followers_url": "https://api.github.com/users/dnaveenr/followers", "following_url": "https://api.github.com/users/dnaveenr/following{/other_user}", "gists_url": "https://api.github.com/users/dnaveenr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dnaveenr", "id": 17746528, "login": "dnaveenr", "node_id": "MDQ6VXNlcjE3NzQ2NTI4", "organizations_url": "https://api.github.com/users/dnaveenr/orgs", "received_events_url": "https://api.github.com/users/dnaveenr/received_events", "repos_url": "https://api.github.com/users/dnaveenr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dnaveenr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dnaveenr/subscriptions", "type": "User", "url": "https://api.github.com/users/dnaveenr" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks a lot for inputs. I'll use the URL suggested and check.\r\n\r\n> we need to implement the streamable (can't use os.path.join) and the non-streamable versions of _generate_examples.\r\n\r\nSure. I will check the reference and try this out, will get back to you if I face any issues.\r\n\r\n> The labels-only data file URL doesn't work for me, so feel free to ask the authors whether they are OK with us hosting the file on the Hub/S3 (to speed up the streamable version)\r\n\r\nJust checked. The author (Adam Harley) has responded positively and allowed us to host the file. Do I share the file with you for hosting it on Hub/S3 ?", "> Just checked. The author (Adam Harley) has responded positively and allowed us to host the file. Do I share the file with you for hosting it on Hub/S3 ?\r\n\r\nYes, feel free to e-mail me the file. Then I'll create a repo under my namespace and push the file there. We run a GH action on a GH dataset after merging to create its repo on the Hub, so after this PR is merged, I'll push the file to the \"official\" namespace and update the download link.", "> You can use this URL to avoid manual download: https://drive.google.com/uc?export=download&id=0Bz1dfcnrpXM-MUt4cHNzUEFXcmc\r\n\r\nFor some reason, the direct download doesn't seem to work for me even with this URL. \r\n```\r\nDownloading and preparing dataset rvl_cdip/default to ~/.cache/huggingface/datasets/rvl_cdip/default/1.0.0/ea152149e06310d60a9ef3c3020199dd4780bb952a773ba5aac6b57d59f12628...\r\nDownloading data files: 100%|█████████████████████████████████████████████████████| 1/1 [00:00<00:00, 6307.22it/s]\r\n{'rvl-cdip': '~/.cache/huggingface/datasets/downloads/07ef956a33750078d570d76fefe9fed49f7dc32ecf6e872d690de11e66bbe869'}\r\n```\r\nAnd this directory does not exist. Am I doing something wrong ?\r\nTo verify, I tried using [gdown](https://github.com/wkentaro/gdown) for the above URL, we get the following : \r\n```\r\nAccess denied with the following error:\r\n\r\n Cannot retrieve the public link of the file. You may need to change\r\n the permission to 'Anyone with the link', or have had many accesses. \r\n\r\nYou may still be able to access the file from the browser:\r\n```\r\n----\r\n\r\n> Yes, feel free to e-mail me the file. Then I'll create a repo under my namespace and push the file there. We run a GH action on a GH dataset after merging to create its repo on the Hub, so after this PR is merged, I'll push the file to the \"official\" namespace and update the download link.\r\n\r\nGot it. I've sent you an email with the file. Thank you.", "Actually this URL works for direct download :\r\n`https://drive.google.com/uc?export=download&confirm=pbef&id=0Bz1dfcnrpXM-MUt4cHNzUEFXcmc`\r\nRef : https://github.com/wkentaro/gdown/issues/146#issuecomment-1042382215\r\n\r\nI'm working on the streamable versions of _generate_examples as well, will update you regarding this.", "Google Drive is a tricky host, and it's easy to exceed daily download quota limits, so if we are allowed to host the `rvl-cdip.tar.gz` file, I can push it to the Hub.", "Just checked, the authors have agreed. He mentioned that he had complaints about the GDrive link.\r\nYou can push it to the Hub and share the link. :)", "I have added :\r\n- streaming support for rvl-cdip.tar.gz file. 
[ Need to test this ]\r\n\r\nIs it possible for you to upload the train.txt, test.txt, val.txt files separately to the Hub instead of labels_only.tar.gz file.\r\nCurrently during the tests in stream mode, we get : \r\n`NotImplementedError: Extraction protocol for TAR archives like 'https://huggingface.co/datasets/mariosasko/rvl_cdip/resolve/main/labels_only.tar.gz' is not implemented in streaming mode. Please use dl_manager.iter_archive instead.`\r\nIf the label files are present as .txt files then we can directly use dl_manager.download.\r\n\r\n\r\n", "The rvl-cdip.tar.gz archive and txt files with the labels are on the Hub!", "- Added 🤗 Hub download links.\r\n- streamable and non-streamable versions of _generate_examples.\r\n- Updated dummy data, both real and dummy dataset tests have passed.\r\n\r\n", "I've removed the extraction of the archive file locally as suggested. Let me know if any other changes are required. :)", "The check for **Update Hub repositories / update-hub-repositories** has failed.\r\n\r\n> https://github.com/huggingface/datasets/runs/6116502392?check_suite_focus=true\r\n\r\n", "Hi ! Thanks for reporting ;) yes this CI job has been failing for a few days. I'm working on fixing it, and I'm manually running it on my side in the meantime", "Great. :D Thank you @lhoestq " ]
2022-03-29T06:00:02Z
2022-04-22T09:55:07Z
2022-04-21T17:15:41Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4050.diff", "html_url": "https://github.com/huggingface/datasets/pull/4050", "merged_at": "2022-04-21T17:15:41Z", "patch_url": "https://github.com/huggingface/datasets/pull/4050.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4050" }
Resolves #2762 Dataset Request : Add RVL-CDIP dataset [#2762](https://github.com/huggingface/datasets/issues/2762) This PR adds the RVL-CDIP dataset. The dataset is distributed via a Google Drive link and wasn't getting downloaded automatically, so I have provided manual_download_instructions. - I have added the dummy_data.zip as well. I need input on how to run the real-data and dummy-data tests for datasets with manual download. Inputs and suggestions for improvement are welcome. Thank you.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4050/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4050/timeline
null
null
true
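The "streamable version of `_generate_examples`" discussed in this thread pairs `dl_manager.iter_archive` with the separate label files mentioned in the comments; a rough sketch, with an illustrative label-file format of `<path-in-archive> <class-id>` per line (not necessarily the real RVL-CDIP layout):

```python
def _generate_examples(archive_iterator, labels_file):
    # Map image paths to class ids from a labels .txt file; the
    # "<path> <class-id>" line format here is an assumption.
    with open(labels_file, encoding="utf-8") as f:
        label_for_path = dict(line.split() for line in f if line.strip())

    # Iterate the TAR sequentially (no os.path.join, no extraction), so the
    # same generator works both locally and in streaming mode.
    for idx, (path, file_obj) in enumerate(archive_iterator):
        if path in label_for_path:
            yield idx, {
                "image": {"path": path, "bytes": file_obj.read()},
                "label": int(label_for_path[path]),
            }
```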
https://api.github.com/repos/huggingface/datasets/issues/5915
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5915/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5915/comments
https://api.github.com/repos/huggingface/datasets/issues/5915/events
https://github.com/huggingface/datasets/pull/5915
1,732,389,984
PR_kwDODunzps5RsVzj
5,915
Raise error in `DatasetBuilder.as_dataset` when `file_format` is not `"arrow"`
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko" }
[]
closed
false
null
[]
null
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006416 / 0.011353 (-0.004937) | 0.004278 / 0.011008 (-0.006731) | 0.097562 / 0.038508 (0.059054) | 0.029488 / 0.023109 (0.006379) | 0.308648 / 0.275898 (0.032750) | 0.339879 / 0.323480 (0.016399) | 0.005288 / 0.007986 (-0.002697) | 0.005033 / 0.004328 (0.000704) | 0.074666 / 0.004250 (0.070416) | 0.034888 / 0.037052 (-0.002164) | 0.309960 / 0.258489 (0.051471) | 0.344276 / 0.293841 (0.050435) | 0.025564 / 0.128546 (-0.102982) | 0.008579 / 0.075646 (-0.067067) | 0.319796 / 0.419271 (-0.099476) | 0.044786 / 0.043533 (0.001253) | 0.308888 / 0.255139 (0.053749) | 0.334001 / 0.283200 (0.050802) | 0.089917 / 0.141683 (-0.051766) | 1.456696 / 1.452155 (0.004541) | 1.542273 / 1.492716 (0.049557) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.213236 / 0.018006 (0.195230) | 0.425139 / 0.000490 (0.424650) | 0.008831 / 0.000200 (0.008631) | 0.000209 / 0.000054 (0.000155) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.023990 / 0.037411 (-0.013421) | 0.096787 / 0.014526 (0.082261) | 0.105783 / 0.176557 (-0.070774) | 0.167182 / 0.737135 (-0.569954) | 0.108896 / 0.296338 (-0.187442) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.419844 / 0.215209 (0.204635) | 4.201909 / 2.077655 (2.124254) | 
1.910784 / 1.504120 (0.406664) | 1.685183 / 1.541195 (0.143988) | 1.716927 / 1.468490 (0.248437) | 0.548261 / 4.584777 (-4.036516) | 3.414168 / 3.745712 (-0.331544) | 1.695446 / 5.269862 (-3.574415) | 0.989668 / 4.565676 (-3.576008) | 0.067328 / 0.424275 (-0.356948) | 0.012084 / 0.007607 (0.004477) | 0.523799 / 0.226044 (0.297754) | 5.240589 / 2.268929 (2.971661) | 2.331618 / 55.444624 (-53.113007) | 1.996094 / 6.876477 (-4.880383) | 2.105450 / 2.142072 (-0.036623) | 0.654614 / 4.805227 (-4.150613) | 0.134721 / 6.500664 (-6.365943) | 0.066227 / 0.075469 (-0.009242) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.196266 / 1.841788 (-0.645521) | 13.990045 / 8.074308 (5.915737) | 13.928126 / 10.191392 (3.736734) | 0.142600 / 0.680424 (-0.537824) | 0.016462 / 0.534201 (-0.517739) | 0.363113 / 0.579283 (-0.216170) | 0.428590 / 0.434364 (-0.005773) | 0.452594 / 0.540337 (-0.087743) | 0.551678 / 1.386936 (-0.835258) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005992 / 0.011353 (-0.005361) | 0.004161 / 0.011008 (-0.006847) | 0.076098 / 0.038508 (0.037589) | 0.028559 / 0.023109 (0.005450) | 0.411696 / 0.275898 (0.135798) | 0.444519 / 0.323480 (0.121040) | 0.004965 / 0.007986 (-0.003021) | 0.003452 / 0.004328 (-0.000876) | 0.075107 / 0.004250 (0.070857) | 0.037305 / 0.037052 (0.000252) | 0.429728 / 0.258489 (0.171239) | 0.444313 / 0.293841 (0.150472) | 0.025278 / 0.128546 (-0.103268) | 0.008527 / 0.075646 (-0.067120) | 0.081502 / 0.419271 (-0.337770) | 0.041237 / 0.043533 (-0.002296) | 0.417848 / 0.255139 (0.162709) | 0.426615 / 0.283200 (0.143415) | 0.094641 / 0.141683 (-0.047041) | 1.525141 / 1.452155 (0.072987) | 1.615608 / 1.492716 (0.122892) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.192867 / 0.018006 (0.174861) | 0.414979 / 0.000490 (0.414490) | 0.000815 / 0.000200 (0.000615) | 0.000068 / 0.000054 (0.000013) |\n\n### Benchmark: 
benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025354 / 0.037411 (-0.012058) | 0.102085 / 0.014526 (0.087559) | 0.107930 / 0.176557 (-0.068626) | 0.160483 / 0.737135 (-0.576652) | 0.112341 / 0.296338 (-0.183997) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.446938 / 0.215209 (0.231728) | 4.480057 / 2.077655 (2.402402) | 2.154825 / 1.504120 (0.650705) | 1.942774 / 1.541195 (0.401580) | 1.996418 / 1.468490 (0.527928) | 0.556728 / 4.584777 (-4.028049) | 3.441228 / 3.745712 (-0.304484) | 3.004179 / 5.269862 (-2.265683) | 1.314104 / 4.565676 (-3.251573) | 0.068670 / 0.424275 (-0.355606) | 0.011972 / 0.007607 (0.004365) | 0.556604 / 0.226044 (0.330560) | 5.561783 / 2.268929 (3.292855) | 2.631262 / 55.444624 (-52.813363) | 2.262143 / 6.876477 (-4.614333) | 2.364243 / 2.142072 (0.222170) | 0.660621 / 4.805227 (-4.144607) | 0.137371 / 6.500664 (-6.363293) | 0.069104 / 0.075469 (-0.006365) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.305706 / 1.841788 (-0.536081) | 14.015932 / 8.074308 (5.941624) | 14.353580 / 10.191392 (4.162187) | 0.146172 / 0.680424 (-0.534251) | 0.016699 / 0.534201 (-0.517502) | 0.357970 / 0.579283 (-0.221313) | 0.389067 / 0.434364 (-0.045297) | 0.415470 / 0.540337 (-0.124867) | 0.501359 / 1.386936 (-0.885577) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#b2b837b4e7267db9e32d2613d8bf8d70d2ce0b47 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after 
write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006800 / 0.011353 (-0.004552) | 0.004721 / 0.011008 (-0.006287) | 0.097760 / 0.038508 (0.059252) | 0.034192 / 0.023109 (0.011083) | 0.298240 / 0.275898 (0.022342) | 0.331119 / 0.323480 (0.007639) | 0.005826 / 0.007986 (-0.002160) | 0.003968 / 0.004328 (-0.000360) | 0.073833 / 0.004250 (0.069582) | 0.046288 / 0.037052 (0.009236) | 0.303018 / 0.258489 (0.044529) | 0.342163 / 0.293841 (0.048322) | 0.028504 / 0.128546 (-0.100042) | 0.009031 / 0.075646 (-0.066615) | 0.331617 / 0.419271 (-0.087655) | 0.060911 / 0.043533 (0.017379) | 0.304044 / 0.255139 (0.048905) | 0.328959 / 0.283200 (0.045759) | 0.113174 / 0.141683 (-0.028509) | 1.424652 / 1.452155 (-0.027502) | 1.531392 / 1.492716 (0.038676) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.206175 / 0.018006 (0.188169) | 0.435916 / 0.000490 (0.435426) | 0.002587 / 0.000200 (0.002387) | 0.000083 / 0.000054 (0.000029) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.026996 / 0.037411 (-0.010415) | 0.106722 / 0.014526 (0.092196) | 0.117655 / 0.176557 (-0.058902) | 0.176969 / 0.737135 (-0.560166) | 0.122577 / 0.296338 (-0.173762) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.396086 / 0.215209 (0.180877) | 3.972465 / 2.077655 (1.894811) | 1.800798 / 1.504120 (0.296678) | 1.616747 / 1.541195 (0.075552) | 1.680711 / 1.468490 (0.212221) | 0.526479 / 4.584777 (-4.058298) | 3.791528 / 3.745712 (0.045816) | 2.989518 / 5.269862 (-2.280344) | 1.463221 / 4.565676 (-3.102455) | 0.065649 / 0.424275 (-0.358626) | 0.012155 / 0.007607 (0.004548) | 0.500241 / 0.226044 (0.274197) | 5.008895 / 2.268929 (2.739966) | 2.315288 / 55.444624 (-53.129336) | 1.959409 / 6.876477 (-4.917067) | 2.102371 / 2.142072 (-0.039701) | 0.639611 / 4.805227 (-4.165617) | 0.140101 / 6.500664 (-6.360563) | 0.063599 / 0.075469 (-0.011870) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.206729 / 1.841788 (-0.635059) | 15.127250 / 8.074308 (7.052942) | 14.397228 / 10.191392 (4.205836) | 0.148802 / 0.680424 (-0.531622) | 0.017628 / 0.534201 (-0.516573) | 0.396150 / 0.579283 (-0.183133) | 0.435826 / 0.434364 (0.001462) | 0.471215 / 0.540337 
(-0.069122) | 0.559413 / 1.386936 (-0.827523) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006479 / 0.011353 (-0.004874) | 0.004520 / 0.011008 (-0.006488) | 0.074395 / 0.038508 (0.035887) | 0.033400 / 0.023109 (0.010291) | 0.388411 / 0.275898 (0.112513) | 0.396714 / 0.323480 (0.073234) | 0.005736 / 0.007986 (-0.002250) | 0.004038 / 0.004328 (-0.000291) | 0.073595 / 0.004250 (0.069345) | 0.045207 / 0.037052 (0.008155) | 0.378096 / 0.258489 (0.119607) | 0.417830 / 0.293841 (0.123989) | 0.028365 / 0.128546 (-0.100181) | 0.008887 / 0.075646 (-0.066760) | 0.080766 / 0.419271 (-0.338505) | 0.046923 / 0.043533 (0.003390) | 0.376190 / 0.255139 (0.121051) | 0.385875 / 0.283200 (0.102675) | 0.107542 / 0.141683 (-0.034141) | 1.409257 / 1.452155 (-0.042898) | 1.518475 / 1.492716 (0.025759) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.223299 / 0.018006 (0.205292) | 0.440640 / 0.000490 (0.440150) | 0.000397 / 0.000200 (0.000197) | 0.000056 / 0.000054 (0.000002) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.031388 / 0.037411 (-0.006024) | 0.113078 / 0.014526 (0.098552) | 0.124398 / 0.176557 (-0.052159) | 0.173802 / 0.737135 (-0.563333) | 0.129555 / 0.296338 (-0.166783) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.440220 / 0.215209 (0.225011) | 4.398052 / 2.077655 (2.320398) | 2.188396 / 1.504120 (0.684276) | 1.997811 / 1.541195 (0.456616) | 2.093338 
/ 1.468490 (0.624847) | 0.519597 / 4.584777 (-4.065180) | 3.885795 / 3.745712 (0.140083) | 2.896327 / 5.269862 (-2.373534) | 1.245785 / 4.565676 (-3.319891) | 0.065675 / 0.424275 (-0.358600) | 0.011729 / 0.007607 (0.004121) | 0.541526 / 0.226044 (0.315482) | 5.406763 / 2.268929 (3.137834) | 2.722914 / 55.444624 (-52.721711) | 2.471111 / 6.876477 (-4.405366) | 2.541488 / 2.142072 (0.399415) | 0.633566 / 4.805227 (-4.171661) | 0.139622 / 6.500664 (-6.361042) | 0.064220 / 0.075469 (-0.011249) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.296097 / 1.841788 (-0.545690) | 15.095320 / 8.074308 (7.021012) | 14.300821 / 10.191392 (4.109429) | 0.145470 / 0.680424 (-0.534954) | 0.017496 / 0.534201 (-0.516705) | 0.400589 / 0.579283 (-0.178694) | 0.423091 / 0.434364 (-0.011273) | 0.468258 / 0.540337 (-0.072079) | 0.570873 / 1.386936 (-0.816063) |\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#aee6c67034d6ff298b2153a2fcdab97f14ee6d66 \"CML watermark\")\n", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.005918 / 0.011353 (-0.005435) | 0.004393 / 0.011008 (-0.006615) | 0.091677 / 0.038508 (0.053169) | 0.033546 / 0.023109 (0.010437) | 0.344682 / 0.275898 (0.068784) | 0.388906 / 0.323480 (0.065426) | 0.005412 / 0.007986 (-0.002574) | 0.004909 / 0.004328 (0.000580) | 0.082589 / 0.004250 (0.078339) | 0.045242 / 0.037052 (0.008190) | 0.339191 / 0.258489 (0.080702) | 0.349673 / 0.293841 (0.055832) | 0.026805 / 0.128546 (-0.101742) | 0.007529 / 0.075646 (-0.068117) | 0.319108 / 0.419271 (-0.100164) | 0.049482 / 0.043533 (0.005949) | 0.320013 / 0.255139 (0.064874) | 0.342059 / 0.283200 (0.058859) | 0.096623 / 0.141683 (-0.045060) | 1.458204 / 1.452155 (0.006049) | 1.571172 / 1.492716 (0.078455) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.235171 / 0.018006 (0.217165) | 0.479678 / 0.000490 (0.479188) | 0.006627 / 0.000200 
(0.006427) | 0.000257 / 0.000054 (0.000202) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.025716 / 0.037411 (-0.011696) | 0.107730 / 0.014526 (0.093204) | 0.111595 / 0.176557 (-0.064962) | 0.171316 / 0.737135 (-0.565819) | 0.118962 / 0.296338 (-0.177377) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.376318 / 0.215209 (0.161109) | 4.039484 / 2.077655 (1.961829) | 1.811548 / 1.504120 (0.307428) | 1.646728 / 1.541195 (0.105533) | 1.688071 / 1.468490 (0.219581) | 0.551256 / 4.584777 (-4.033520) | 4.153931 / 3.745712 (0.408218) | 3.424154 / 5.269862 (-1.845707) | 1.734860 / 4.565676 (-2.830816) | 0.067753 / 0.424275 (-0.356522) | 0.012699 / 0.007607 (0.005092) | 0.505722 / 0.226044 (0.279677) | 4.997321 / 2.268929 (2.728392) | 2.258755 / 55.444624 (-53.185869) | 1.954382 / 6.876477 (-4.922095) | 1.967545 / 2.142072 (-0.174527) | 0.630489 / 4.805227 (-4.174738) | 0.138738 / 6.500664 (-6.361926) | 0.064907 / 0.075469 (-0.010562) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.209634 / 1.841788 (-0.632154) | 15.055062 / 8.074308 (6.980754) | 12.721606 / 10.191392 (2.530214) | 0.164908 / 0.680424 (-0.515516) | 0.019528 / 0.534201 (-0.514673) | 0.400136 / 0.579283 (-0.179147) | 0.451640 / 0.434364 (0.017276) | 0.466272 / 0.540337 (-0.074065) | 0.553258 / 1.386936 (-0.833679) |\n\n</details>\nPyArrow==latest\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_as_numpy after write_nested_sequence | read_batch_unformated after write_array2d | read_batch_unformated after write_flattened_sequence | read_batch_unformated after write_nested_sequence | read_col_formatted_as_numpy after write_array2d | read_col_formatted_as_numpy after write_flattened_sequence | read_col_formatted_as_numpy after write_nested_sequence | read_col_unformated after write_array2d | read_col_unformated after write_flattened_sequence | read_col_unformated after write_nested_sequence | read_formatted_as_numpy after write_array2d | read_formatted_as_numpy after write_flattened_sequence | read_formatted_as_numpy after write_nested_sequence | read_unformated after write_array2d | read_unformated after write_flattened_sequence | read_unformated after write_nested_sequence | write_array2d | write_flattened_sequence | write_nested_sequence 
|\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.006341 / 0.011353 (-0.005011) | 0.004617 / 0.011008 (-0.006391) | 0.077953 / 0.038508 (0.039445) | 0.031104 / 0.023109 (0.007995) | 0.360328 / 0.275898 (0.084430) | 0.408403 / 0.323480 (0.084923) | 0.005704 / 0.007986 (-0.002282) | 0.003588 / 0.004328 (-0.000741) | 0.071441 / 0.004250 (0.067190) | 0.043520 / 0.037052 (0.006468) | 0.375798 / 0.258489 (0.117309) | 0.400955 / 0.293841 (0.107114) | 0.028166 / 0.128546 (-0.100381) | 0.008578 / 0.075646 (-0.067068) | 0.086673 / 0.419271 (-0.332598) | 0.046424 / 0.043533 (0.002891) | 0.367276 / 0.255139 (0.112137) | 0.414550 / 0.283200 (0.131351) | 0.097355 / 0.141683 (-0.044328) | 1.465191 / 1.452155 (0.013036) | 1.555028 / 1.492716 (0.062312) |\n\n### Benchmark: benchmark_getitem\\_100B.json\n\n| metric | get_batch_of\\_1024\\_random_rows | get_batch_of\\_1024\\_rows | get_first_row | get_last_row |\n|--------|---|---|---|---|\n| new / old (diff) | 0.196642 / 0.018006 (0.178636) | 0.464221 / 0.000490 (0.463731) | 0.002726 / 0.000200 (0.002526) | 0.000110 / 0.000054 (0.000055) |\n\n### Benchmark: benchmark_indices_mapping.json\n\n| metric | select | shard | shuffle | sort | train_test_split |\n|--------|---|---|---|---|---|\n| new / old (diff) | 0.028078 / 0.037411 (-0.009333) | 0.110762 / 0.014526 (0.096236) | 0.122212 / 0.176557 (-0.054344) | 0.164758 / 0.737135 (-0.572377) | 0.133969 / 0.296338 (-0.162370) |\n\n### Benchmark: benchmark_iterating.json\n\n| metric | read 5000 | read 50000 | read_batch 50000 10 | read_batch 50000 100 | read_batch 50000 1000 | read_formatted numpy 5000 | read_formatted pandas 5000 | read_formatted tensorflow 5000 | read_formatted torch 5000 | read_formatted_batch numpy 5000 10 | read_formatted_batch numpy 5000 1000 | shuffled read 5000 | shuffled read 50000 | shuffled read_batch 50000 10 | shuffled read_batch 50000 100 | shuffled read_batch 50000 1000 | shuffled read_formatted numpy 5000 | shuffled read_formatted_batch numpy 5000 10 | shuffled read_formatted_batch numpy 5000 1000 |\n|--------|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 0.448134 / 0.215209 (0.232925) | 4.339335 / 2.077655 (2.261680) | 2.129209 / 1.504120 (0.625089) | 1.957805 / 1.541195 (0.416611) | 1.994038 / 1.468490 (0.525548) | 0.497101 / 4.584777 (-4.087676) | 4.114432 / 3.745712 (0.368720) | 3.437305 / 5.269862 (-1.832556) | 1.692810 / 4.565676 (-2.872866) | 0.071077 / 0.424275 (-0.353198) | 0.012735 / 0.007607 (0.005128) | 0.534393 / 0.226044 (0.308348) | 5.217445 / 2.268929 (2.948517) | 2.594858 / 55.444624 (-52.849766) | 2.317464 / 6.876477 (-4.559012) | 2.337974 / 2.142072 (0.195902) | 0.622291 / 4.805227 (-4.182936) | 0.144934 / 6.500664 (-6.355730) | 0.068524 / 0.075469 (-0.006945) |\n\n### Benchmark: benchmark_map_filter.json\n\n| metric | filter | map fast-tokenizer batched | map identity | map identity batched | map no-op batched | map no-op batched numpy | map no-op batched pandas | map no-op batched pytorch | map no-op batched tensorflow |\n|--------|---|---|---|---|---|---|---|---|---|\n| new / old (diff) | 1.310601 / 1.841788 (-0.531187) | 15.771527 / 8.074308 (7.697219) | 13.952032 / 10.191392 (3.760640) | 0.212473 / 0.680424 (-0.467951) | 0.017963 / 0.534201 (-0.516238) | 0.400755 / 0.579283 (-0.178528) | 0.439817 / 0.434364 (0.005453) | 0.472614 / 0.540337 (-0.067724) | 0.558410 / 1.386936 (-0.828526) 
|\n\n</details>\n</details>\n\n![](https://cml.dev/watermark.png#1b51429d02a0da1ff798873afe655309136c5689 \"CML watermark\")\n" ]
2023-05-30T14:27:55Z
2023-05-31T13:31:21Z
2023-05-31T13:23:54Z
CONTRIBUTOR
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/5915.diff", "html_url": "https://github.com/huggingface/datasets/pull/5915", "merged_at": "2023-05-31T13:23:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/5915.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/5915" }
Raise an error in `DatasetBuilder.as_dataset` when `file_format != "arrow"` (and fix the docstring).

Fix #5874
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5915/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5915/timeline
null
null
true
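For context on the PR row above, here is a minimal sketch of the kind of guard it describes: failing fast when `as_dataset` is called on a cache that is not in Arrow format. The class shape, attribute name, and exception type are simplified assumptions for illustration, not the actual `datasets` source.

```python
# Minimal sketch of the guard this PR describes (names are simplified
# assumptions, not the real `datasets` implementation).
class DatasetBuilder:
    def __init__(self, cache_file_format: str = "arrow"):
        # hypothetical attribute recording how the cache was materialized
        self._file_format = cache_file_format

    def as_dataset(self):
        # `as_dataset` can only memory-map Arrow caches, so any other
        # format should raise a clear error instead of failing later
        if self._file_format != "arrow":
            raise NotImplementedError(
                f"Loading a dataset cached in {self._file_format!r} format "
                "is not supported; `as_dataset` requires 'arrow'."
            )
        return "in-memory dataset"  # placeholder for the Arrow-backed dataset


builder = DatasetBuilder(cache_file_format="parquet")
try:
    builder.as_dataset()
except NotImplementedError as err:
    print(err)
```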
https://api.github.com/repos/huggingface/datasets/issues/5889
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5889/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5889/comments
https://api.github.com/repos/huggingface/datasets/issues/5889/events
https://github.com/huggingface/datasets/issues/5889
1,722,373,618
I_kwDODunzps5mqVXy
5,889
Token Alignment for input and output data over train and test batch/dataset.
{ "avatar_url": "https://avatars.githubusercontent.com/u/125154243?v=4", "events_url": "https://api.github.com/users/akesh1235/events{/privacy}", "followers_url": "https://api.github.com/users/akesh1235/followers", "following_url": "https://api.github.com/users/akesh1235/following{/other_user}", "gists_url": "https://api.github.com/users/akesh1235/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/akesh1235", "id": 125154243, "login": "akesh1235", "node_id": "U_kgDOB3Wzww", "organizations_url": "https://api.github.com/users/akesh1235/orgs", "received_events_url": "https://api.github.com/users/akesh1235/received_events", "repos_url": "https://api.github.com/users/akesh1235/repos", "site_admin": false, "starred_url": "https://api.github.com/users/akesh1235/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/akesh1235/subscriptions", "type": "User", "url": "https://api.github.com/users/akesh1235" }
[]
open
false
null
[]
null
[]
2023-05-23T15:58:55Z
2023-05-23T15:58:55Z
null
NONE
null
null
null
`data`

> DatasetDict({
>     train: Dataset({
>         features: ['input', 'output'],
>         num_rows: 4500
>     })
>     test: Dataset({
>         features: ['input', 'output'],
>         num_rows: 500
>     })
> })

**# input (incorrect sentence)**
`data['train'][0]['input']`
**>>** 'We are meet sunday 10am12pmET in Crown Heights Brooklyn New York'

**# output (correct sentence)**
`data['train'][0]['output']`
**>>** 'We meet Sundays 10am-12pmET in Crown Heights, Brooklyn, New York.'

**I want to align the output tokens with the input.**

```
# tokenize both inputs and targets
def tokenize_fn(batch):
    # tokenize the input sequence first
    # this populates input_ids, attention_mask, etc.
    tokenized_inputs = tokenizer(
        batch['input']
    )

    labels_batch = tokenizer.tokenize(batch['output'])  # original targets

    aligned_labels_batch = []
    for i, labels in enumerate(labels_batch):
        word_ids = tokenized_inputs[i].word_ids()
        # align_targets is another user-defined function which is called here
        aligned_labels_batch.append(align_targets(labels, word_ids))

    # recall: the target must be stored in a key called 'labels'
    tokenized_inputs['labels'] = aligned_labels_batch

    return tokenized_inputs
```

```
data.map(
    tokenize_fn,
    batched=True,
    remove_columns=data['train'].column_names,
)
```

When this user-defined function is mapped over every record of the train and test splits, I get the following errors:

**1.** **raise DatasetTransformationNotAllowedError( "Using `.map` in batched mode on a dataset with attached indexes is allowed only if it doesn't create or remove existing examples. You can first run `.drop_index()` to remove your index and then re-add it." )**

**2.** **TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]**
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5889/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5889/timeline
null
null
false
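An editorial sketch of one way to make the `tokenize_fn` above batch-safe: the `TypeError` typically appears because `tokenizer.tokenize` receives a whole *list* of strings in batched mode, while the callable tokenizer handles batches natively. `align_targets` stays a stub for the reporter's own helper, and the checkpoint name is an arbitrary assumption. The separate `DatasetTransformationNotAllowedError` points at an attached search index, which would need to be dropped (e.g. via `Dataset.drop_index`) before a batched `map` that changes the number of rows.

```python
# Sketch, not the reporter's actual code: fix the batch handling of targets.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")  # assumed checkpoint


def align_targets(labels, word_ids):
    # stub for the reporter's own helper: real logic must map target tokens
    # onto input word ids; here every non-special position gets a dummy 0
    return [-100 if wid is None else 0 for wid in word_ids]


def tokenize_fn(batch):
    tokenized_inputs = tokenizer(batch["input"], truncation=True)
    # tokenize each target string individually; passing the whole list to
    # `tokenizer.tokenize` is what triggers the TextEncodeInput TypeError
    labels_batch = [tokenizer.tokenize(text) for text in batch["output"]]
    aligned_labels_batch = []
    for i, labels in enumerate(labels_batch):
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        aligned_labels_batch.append(align_targets(labels, word_ids))
    tokenized_inputs["labels"] = aligned_labels_batch
    return tokenized_inputs
```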
https://api.github.com/repos/huggingface/datasets/issues/183
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/183/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/183/comments
https://api.github.com/repos/huggingface/datasets/issues/183/events
https://github.com/huggingface/datasets/issues/183
623,054,270
MDU6SXNzdWU2MjMwNTQyNzA=
183
[Bug] labels of glue/ax are all -1
{ "avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4", "events_url": "https://api.github.com/users/richarddwang/events{/privacy}", "followers_url": "https://api.github.com/users/richarddwang/followers", "following_url": "https://api.github.com/users/richarddwang/following{/other_user}", "gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/richarddwang", "id": 17963619, "login": "richarddwang", "node_id": "MDQ6VXNlcjE3OTYzNjE5", "organizations_url": "https://api.github.com/users/richarddwang/orgs", "received_events_url": "https://api.github.com/users/richarddwang/received_events", "repos_url": "https://api.github.com/users/richarddwang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions", "type": "User", "url": "https://api.github.com/users/richarddwang" }
[]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq" } ]
null
[ "This is the test set given by the Glue benchmark. The labels are not provided, and therefore set to -1.", "Ah, yeah. Why it didn’t occur to me. 😂\nThank you for your comment." ]
2020-05-22T08:43:36Z
2020-05-22T22:14:05Z
2020-05-22T22:14:05Z
CONTRIBUTOR
null
null
null
```
ax = nlp.load_dataset('glue', 'ax')
for i in range(30):
    print(ax['test'][i]['label'], end=', ')
```
```
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
```
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/183/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/183/timeline
null
completed
false
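A quick check confirming the explanation in the comments: the GLUE `ax` test labels are hidden by design, so every `label` is the placeholder `-1`. This sketch uses the current `datasets` API (the issue itself predates the rename from `nlp`) and assumes network access to the Hub.

```python
from datasets import load_dataset

ax = load_dataset("glue", "ax")
# the hidden test-set labels are all the placeholder value -1,
# not real annotations
print(set(ax["test"]["label"]))  # expected: {-1}
```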
https://api.github.com/repos/huggingface/datasets/issues/6206
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/6206/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/6206/comments
https://api.github.com/repos/huggingface/datasets/issues/6206/events
https://github.com/huggingface/datasets/issues/6206
1,879,473,745
I_kwDODunzps5wBn5R
6,206
When calling load_dataset, raise error: pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays
{ "avatar_url": "https://avatars.githubusercontent.com/u/51043929?v=4", "events_url": "https://api.github.com/users/aihao2000/events{/privacy}", "followers_url": "https://api.github.com/users/aihao2000/followers", "following_url": "https://api.github.com/users/aihao2000/following{/other_user}", "gists_url": "https://api.github.com/users/aihao2000/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/aihao2000", "id": 51043929, "login": "aihao2000", "node_id": "MDQ6VXNlcjUxMDQzOTI5", "organizations_url": "https://api.github.com/users/aihao2000/orgs", "received_events_url": "https://api.github.com/users/aihao2000/received_events", "repos_url": "https://api.github.com/users/aihao2000/repos", "site_admin": false, "starred_url": "https://api.github.com/users/aihao2000/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/aihao2000/subscriptions", "type": "User", "url": "https://api.github.com/users/aihao2000" }
[]
closed
false
null
[]
null
[ "I solved the problem by modifying the \"self DEFAULT_WRITER_BATCH_SIZE\" in \"class MyDataset (datasets. GeneratorBasedBuilder) : __init__\"" ]
2023-09-04T04:14:00Z
2023-09-04T06:05:50Z
2023-09-04T06:05:49Z
NONE
null
null
null
### Describe the bug

When calling `load_dataset`, the following error is raised:

```
Traceback (most recent call last):
  File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 1694, in _prepare_split_single
    writer.write(example, key)
  File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/arrow_writer.py", line 490, in write
    self.write_examples_on_file()
  File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/arrow_writer.py", line 448, in write_examples_on_file
    self.write_batch(batch_examples=batch_examples)
  File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/arrow_writer.py", line 559, in write_batch
    self.write_table(pa_table, writer_batch_size)
  File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/arrow_writer.py", line 571, in write_table
    pa_table = pa_table.combine_chunks()
               ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "pyarrow/table.pxi", line 3439, in pyarrow.lib.Table.combine_chunks
  File "pyarrow/error.pxi", line 144, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 100, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
    dataset = load_dataset(
              ^^^^^^^^^^^^^
  File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/load.py", line 2133, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 954, in download_and_prepare
    self._download_and_prepare(
  File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 1717, in _download_and_prepare
    super()._download_and_prepare(
  File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 1049, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 1555, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 1712, in _prepare_split_single
    raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset

Setting num_proc from 8 back to 1 for the train split to disable multiprocessing as it only contains one shard.
09/04/2023 12:02:04 - WARNING - datasets.builder - Setting num_proc from 8 back to 1 for the train split to disable multiprocessing as it only contains one shard.
```

### Steps to reproduce the bug

Call `load_dataset` with large images as a feature.

### Expected behavior

No error.

### Environment info

- `datasets` version: 2.14.3
- Platform: Linux-6.2.0-31-generic-x86_64-with-glibc2.35
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- PyArrow version: 12.0.1
- Pandas version: 2.0.3
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/6206/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/6206/timeline
null
completed
false
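A sketch of the workaround from the comment above: lowering `DEFAULT_WRITER_BATCH_SIZE` on a `GeneratorBasedBuilder` subclass keeps each Arrow write small enough that very large images do not overflow the 32-bit string offsets behind "offset overflow while concatenating arrays". The builder below is only a skeleton to show where the attribute lives, not a complete loading script; the batch size of 100 is an arbitrary assumption.

```python
import datasets


class MyDataset(datasets.GeneratorBasedBuilder):
    # write at most 100 examples per Arrow batch instead of the default,
    # so each chunk of large image bytes stays under the offset limit
    DEFAULT_WRITER_BATCH_SIZE = 100

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"image": datasets.Image()})
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN)]

    def _generate_examples(self):
        # yield (key, example) pairs; real loading logic omitted here
        yield 0, {"image": None}
```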
https://api.github.com/repos/huggingface/datasets/issues/4337
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/4337/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/4337/comments
https://api.github.com/repos/huggingface/datasets/issues/4337/events
https://github.com/huggingface/datasets/pull/4337
1,234,470,083
PR_kwDODunzps43vuzF
4,337
Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR
{ "avatar_url": "https://avatars.githubusercontent.com/u/14205986?v=4", "events_url": "https://api.github.com/users/sashavor/events{/privacy}", "followers_url": "https://api.github.com/users/sashavor/followers", "following_url": "https://api.github.com/users/sashavor/following{/other_user}", "gists_url": "https://api.github.com/users/sashavor/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sashavor", "id": 14205986, "login": "sashavor", "node_id": "MDQ6VXNlcjE0MjA1OTg2", "organizations_url": "https://api.github.com/users/sashavor/orgs", "received_events_url": "https://api.github.com/users/sashavor/received_events", "repos_url": "https://api.github.com/users/sashavor/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sashavor/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sashavor/subscriptions", "type": "User", "url": "https://api.github.com/users/sashavor" }
[]
closed
false
null
[]
null
[ "Summary of CircleCI errors:\r\n\r\n- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sms_spam**: `Data Instances` and`Data Splits` are empty.\r\n- **Quora** : Expected some content in section `Citation Information` but it is empty, missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sentiment140**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n\r\nThere are also some timeout errors, I don't really understand the source though :confused: ", "_The documentation is not available anymore as the PR was closed or merged._" ]
2022-05-12T20:52:02Z
2022-05-16T16:26:19Z
2022-05-16T16:18:30Z
NONE
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/4337.diff", "html_url": "https://github.com/huggingface/datasets/pull/4337", "merged_at": "2022-05-16T16:18:30Z", "patch_url": "https://github.com/huggingface/datasets/pull/4337.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/4337" }
Adding evaluation metadata for:
- Reddit
- Rotten Tomatoes
- SemEval 2010
- Sentiment 140
- SMS Spam
- Snips
- SQuAD
- SQuAD v2
- Timit ASR
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/4337/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/4337/timeline
null
null
true
https://api.github.com/repos/huggingface/datasets/issues/3253
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/3253/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/3253/comments
https://api.github.com/repos/huggingface/datasets/issues/3253/events
https://github.com/huggingface/datasets/issues/3253
1,051,308,972
I_kwDODunzps4-qbOs
3,253
`GeneratorBasedBuilder` does not support `None` values
{ "avatar_url": "https://avatars.githubusercontent.com/u/69010336?v=4", "events_url": "https://api.github.com/users/pavel-lexyr/events{/privacy}", "followers_url": "https://api.github.com/users/pavel-lexyr/followers", "following_url": "https://api.github.com/users/pavel-lexyr/following{/other_user}", "gists_url": "https://api.github.com/users/pavel-lexyr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/pavel-lexyr", "id": 69010336, "login": "pavel-lexyr", "node_id": "MDQ6VXNlcjY5MDEwMzM2", "organizations_url": "https://api.github.com/users/pavel-lexyr/orgs", "received_events_url": "https://api.github.com/users/pavel-lexyr/received_events", "repos_url": "https://api.github.com/users/pavel-lexyr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/pavel-lexyr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/pavel-lexyr/subscriptions", "type": "User", "url": "https://api.github.com/users/pavel-lexyr" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthanks for reporting and providing a minimal reproducible example. \r\n\r\nThis line of the PR I've linked in our discussion on the Forum will add support for `None` values:\r\nhttps://github.com/huggingface/datasets/blob/a53de01842aac65c66a49b2439e18fa93ff73ceb/src/datasets/features/features.py#L835\r\n\r\nI expect that PR to be merged soon." ]
2021-11-11T19:51:21Z
2021-12-09T14:26:58Z
2021-12-09T14:26:58Z
NONE
null
null
null
## Describe the bug

`GeneratorBasedBuilder` does not support `None` values.

## Steps to reproduce the bug

See [this repository](https://github.com/pavel-lexyr/huggingface-datasets-bug-reproduction) for minimal reproduction.

## Expected results

Dataset is initialized with a `None` value in the `value` column.

## Actual results

```
Traceback (most recent call last):
  File "main.py", line 3, in <module>
    datasets.load_dataset("./bad-data")
  File ".../datasets/load.py", line 1632, in load_dataset
    builder_instance.download_and_prepare(
  File ".../datasets/builder.py", line 607, in download_and_prepare
    self._download_and_prepare(
  File ".../datasets/builder.py", line 697, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File ".../datasets/builder.py", line 1103, in _prepare_split
    example = self.info.features.encode_example(record)
  File ".../datasets/features/features.py", line 1033, in encode_example
    return encode_nested_example(self, example)
  File ".../datasets/features/features.py", line 808, in encode_nested_example
    return {
  File ".../datasets/features/features.py", line 809, in <dictcomp>
    k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
  File ".../datasets/features/features.py", line 855, in encode_nested_example
    return schema.encode_example(obj)
  File ".../datasets/features/features.py", line 299, in encode_example
    return float(value)
TypeError: float() argument must be a string or a number, not 'NoneType'
```

## Environment info

- `datasets` version: 1.15.1
- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 6.0.0
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/3253/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/3253/timeline
null
completed
false
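A user-level sketch of the behavior the linked PR enables: a `None` in a float column should round-trip instead of crashing in `float(value)`. This assumes a `datasets` release that includes the fix referenced in the comment above.

```python
from datasets import Dataset, Features, Value

features = Features({"value": Value("float32")})
# with None support merged, the null value is encoded as an Arrow null
ds = Dataset.from_dict({"value": [1.0, None]}, features=features)
print(ds[1])  # {'value': None}
```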
https://api.github.com/repos/huggingface/datasets/issues/2779
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2779/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2779/comments
https://api.github.com/repos/huggingface/datasets/issues/2779/events
https://github.com/huggingface/datasets/pull/2779
964,775,085
MDExOlB1bGxSZXF1ZXN0NzA3MTgwNTgw
2,779
Fix sacrebleu tokenizers
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova" }
[]
closed
false
null
[]
null
[]
2021-08-10T09:24:27Z
2021-08-10T11:03:08Z
2021-08-10T10:57:54Z
MEMBER
null
0
{ "diff_url": "https://github.com/huggingface/datasets/pull/2779.diff", "html_url": "https://github.com/huggingface/datasets/pull/2779", "merged_at": "2021-08-10T10:57:54Z", "patch_url": "https://github.com/huggingface/datasets/pull/2779.patch", "url": "https://api.github.com/repos/huggingface/datasets/pulls/2779" }
The latest `sacrebleu` release (v2.0.0) removed `sacrebleu.TOKENIZERS`: https://github.com/mjpost/sacrebleu/pull/152/files#diff-2553a315bb1f7e68c9c1b00d56eaeb74f5205aeb3a189bc3e527b122c6078795L17-R15 This PR hot-fixes the bug by using a private function in `sacrebleu`: `sacrebleu.metrics.bleu._get_tokenizer()`. Eventually, this should be fixed further so that only public functions are used. This is a partial hotfix of #2781.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2779/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2779/timeline
null
null
true
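A hedged sketch of the compatibility shim this PR describes. It assumes only what the PR itself names: the public `sacrebleu.TOKENIZERS` mapping removed in v2.0.0 and the private `sacrebleu.metrics.bleu._get_tokenizer` helper that replaces it; the exact objects the two code paths return differ between major versions, so this mirrors the idea rather than the exact diff.

```python
import sacrebleu


def resolve_tokenizer(name: str = "13a"):
    if hasattr(sacrebleu, "TOKENIZERS"):
        # sacrebleu < 2.0.0 exposed a public registry of tokenizers
        return sacrebleu.TOKENIZERS[name]
    # sacrebleu >= 2.0.0: fall back to the private helper the PR uses
    from sacrebleu.metrics.bleu import _get_tokenizer
    return _get_tokenizer(name)


tokenizer_13a = resolve_tokenizer("13a")
```

As the PR body notes, leaning on a private helper like this is a stopgap; a later fix should rely only on public API.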
https://api.github.com/repos/huggingface/datasets/issues/5496
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5496/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5496/comments
https://api.github.com/repos/huggingface/datasets/issues/5496/events
https://github.com/huggingface/datasets/issues/5496
1,567,301,765
I_kwDODunzps5dayCF
5,496
Add a `reduce` method
{ "avatar_url": "https://avatars.githubusercontent.com/u/59542043?v=4", "events_url": "https://api.github.com/users/zhangir-azerbayev/events{/privacy}", "followers_url": "https://api.github.com/users/zhangir-azerbayev/followers", "following_url": "https://api.github.com/users/zhangir-azerbayev/following{/other_user}", "gists_url": "https://api.github.com/users/zhangir-azerbayev/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/zhangir-azerbayev", "id": 59542043, "login": "zhangir-azerbayev", "node_id": "MDQ6VXNlcjU5NTQyMDQz", "organizations_url": "https://api.github.com/users/zhangir-azerbayev/orgs", "received_events_url": "https://api.github.com/users/zhangir-azerbayev/received_events", "repos_url": "https://api.github.com/users/zhangir-azerbayev/repos", "site_admin": false, "starred_url": "https://api.github.com/users/zhangir-azerbayev/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/zhangir-azerbayev/subscriptions", "type": "User", "url": "https://api.github.com/users/zhangir-azerbayev" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi! Sure, feel free to open a PR, so we can see the API you have in mind.", "I would like to give it a go! #self-assign", "Closing as `Dataset.map` can be used instead (see https://github.com/huggingface/datasets/pull/5533#issuecomment-1440571658 and https://github.com/huggingface/datasets/pull/5533#issuecomment-1446403263)" ]
2023-02-02T04:30:22Z
2023-07-21T14:24:32Z
2023-07-21T14:24:32Z
NONE
null
null
null
### Feature request

Right now the `Dataset` class implements `map()` and `filter()`, but leaves out the third functional idiom popular among Python users: `reduce`.

### Motivation

A `reduce` method is often useful when calculating dataset statistics, for example, the occurrence of a particular n-gram or the average line length of a code dataset.

### Your contribution

I haven't contributed to `datasets` before, but I don't expect this will be too difficult, since the implementation will closely follow that of `map` and `filter`. I could have a crack at it over the weekend.
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/5496/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/5496/timeline
null
completed
false
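A sketch of the fold the closing comment alludes to: Python's own `functools.reduce` over batches already covers the statistics use case in the request (average line length here), without a dedicated `Dataset.reduce`. `Dataset.iter(batch_size=...)` is assumed to be available (added in later `datasets` releases).

```python
from functools import reduce

from datasets import Dataset

ds = Dataset.from_dict({"text": ["foo", "barbaz", "qux"]})

# fold batches into a running character count, then derive the average
total_chars = reduce(
    lambda acc, batch: acc + sum(len(t) for t in batch["text"]),
    ds.iter(batch_size=2),
    0,
)
print(total_chars / len(ds))  # average line length
```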