| url (string, 58-61 chars) | repository_url (string, 1 class) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-1.83B) | node_id (string, 18-32 chars) | number (int64, 1-6.09k) | title (string, 1-290 chars) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | milestone (dict) | comments (int64, 0-54) | created_at (string, 20 chars) | updated_at (string, 20 chars) | closed_at (string, 20 chars, nullable) | active_lock_reason (null) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4340/comments | https://api.github.com/repos/huggingface/datasets/issues/4340/events | https://github.com/huggingface/datasets/pull/4340 | 1,234,671,025 | PR_kwDODunzps43wY1U | 4,340 | Fix irc_disentangle dataset script | [] | closed | false | null | 1 | 2022-05-13T02:37:57Z | 2022-05-24T15:37:30Z | 2022-05-24T15:37:29Z | null | updated extracted dataset's repo's latest commit hash (included in tarball's name), and updated the related data_infos.json | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4340/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4340/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4340.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4340",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4340.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4340"
} | true | [
"Thanks ! This has been fixed in https://github.com/huggingface/datasets/pull/4377, we can close this PR"
] |
https://api.github.com/repos/huggingface/datasets/issues/433 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/433/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/433/comments | https://api.github.com/repos/huggingface/datasets/issues/433/events | https://github.com/huggingface/datasets/issues/433 | 665,311,025 | MDU6SXNzdWU2NjUzMTEwMjU= | 433 | How to reuse functionality of a (generic) dataset? | [] | closed | false | null | 4 | 2020-07-24T17:27:37Z | 2022-10-04T17:59:34Z | 2022-10-04T17:59:33Z | null | I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to reuse formats and loading functionality for datasets with a common format?
In my case, it took a bit of time to create the Brat dataset, and I think others would appreciate not having to work that out again. Also, I assume there are other widely used formats (e.g., CoNLL), so having this would really ease dataset onboarding and adoption of the library. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/433/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/433/timeline | null | completed | null | null | false | [
"Hi @ArneBinder, we have a few \"generic\" datasets which are intended to load data files with a predefined format:\r\n- csv: https://github.com/huggingface/nlp/tree/master/datasets/csv\r\n- json: https://github.com/huggingface/nlp/tree/master/datasets/json\r\n- text: https://github.com/huggingface/nlp/tree/master/... |
https://api.github.com/repos/huggingface/datasets/issues/5291 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5291/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5291/comments | https://api.github.com/repos/huggingface/datasets/issues/5291/events | https://github.com/huggingface/datasets/pull/5291 | 1,462,983,472 | PR_kwDODunzps5DoKNC | 5,291 | [build doc] for v2.7.1 & v2.6.2 | [] | closed | false | null | 2 | 2022-11-24T08:54:47Z | 2022-11-24T09:14:10Z | 2022-11-24T09:11:15Z | null | Do NOT merge. Using this PR to build docs for [v2.7.1](https://github.com/huggingface/datasets/pull/5291/commits/f4914af20700f611b9331a9e3ba34743bbeff934) & [v2.6.2](https://github.com/huggingface/datasets/pull/5291/commits/025f85300a0874eeb90a20393c62f25ac0accaa0) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5291/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5291/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5291.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5291",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5291.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5291"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"doc versions are built https://huggingface.co/docs/datasets/index"
] |
https://api.github.com/repos/huggingface/datasets/issues/2390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2390/comments | https://api.github.com/repos/huggingface/datasets/issues/2390/events | https://github.com/huggingface/datasets/pull/2390 | 897,903,642 | MDExOlB1bGxSZXF1ZXN0NjQ5ODQ0NjQ2 | 2,390 | Add check for task templates on dataset load | [] | closed | false | null | 1 | 2021-05-21T10:16:57Z | 2021-05-21T15:49:09Z | 2021-05-21T15:49:06Z | null | This PR adds a check that the features of a dataset match the schema of each compatible task template. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2390/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2390/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2390.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2390",
"merged_at": "2021-05-21T15:49:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2390.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2390"
} | true | [
"LGTM now, thank you =)"
] |
https://api.github.com/repos/huggingface/datasets/issues/5931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5931/comments | https://api.github.com/repos/huggingface/datasets/issues/5931/events | https://github.com/huggingface/datasets/issues/5931 | 1,745,408,784 | I_kwDODunzps5oCNMQ | 5,931 | `datasets.map` not reusing cached copy by default | [] | closed | false | null | 1 | 2023-06-07T09:03:33Z | 2023-06-21T16:15:40Z | 2023-06-21T16:15:40Z | null | ### Describe the bug
When I load the dataset from a local directory, its cached copy is picked up after the first time. However, the `map` operation is applied again and its cached copy is not picked up. Is there any way to use the cached copy instead of processing it again? The only solution I could think of was to use `save_to_disk` after my last transform and then use that in my DataLoader pipeline. Are there any other solutions?
One more thing: my dataset occupies 6GB of storage after I use `map`; is there any way I can reduce that memory usage?
### Steps to reproduce the bug
```python
# make sure that dataset decodes audio with correct sampling rate
dataset_sampling_rate = next(iter(self.raw_datasets.values())).features["audio"].sampling_rate
if dataset_sampling_rate != self.feature_extractor.sampling_rate:
    self.raw_datasets = self.raw_datasets.cast_column(
        "audio", datasets.features.Audio(sampling_rate=self.feature_extractor.sampling_rate)
    )

vectorized_datasets = self.raw_datasets.map(
    self.prepare_dataset,
    remove_columns=next(iter(self.raw_datasets.values())).column_names,
    num_proc=self.num_workers,
    desc="preprocess datasets",
)

# filter data that is longer than max_input_length
self.vectorized_datasets = vectorized_datasets.filter(
    self.is_audio_in_length_range,
    num_proc=self.num_workers,
    input_columns=["input_length"],
)

def prepare_dataset(self, batch):
    # load audio
    sample = batch["audio"]
    inputs = self.feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
    batch["input_values"] = inputs.input_values[0]
    batch["input_length"] = len(batch["input_values"])
    batch["labels"] = self.tokenizer(batch["target_text"]).input_ids
    return batch
```
### Expected behavior
`map` to use cached copy and if possible an alternative technique to reduce memory usage after using `map`
### Environment info
- `datasets` version: 2.12.0
- Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- Huggingface_hub version: 0.15.1
- PyArrow version: 12.0.0
- Pandas version: 2.0.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5931/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5931/timeline | null | completed | null | null | false | [
"This can happen when a map transform cannot be hashed deterministically (e.g., an object referenced by the transform changes its state after the first call - an issue with fast tokenizers). The solution is to provide `cache_file_name` in the `map` call to check this file for the cached result instead of relying on... |
https://api.github.com/repos/huggingface/datasets/issues/388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/388/comments | https://api.github.com/repos/huggingface/datasets/issues/388/events | https://github.com/huggingface/datasets/issues/388 | 656,707,497 | MDU6SXNzdWU2NTY3MDc0OTc= | 388 | 🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17 | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 5 | 2020-07-14T15:36:41Z | 2022-10-04T18:01:28Z | 2022-10-04T18:01:28Z | null | 1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code:
```python
nlp.load_dataset('wmt14','de-en')
nlp.load_dataset('wmt15','de-en')
nlp.load_dataset('wmt17','de-en')
nlp.load_dataset('wmt19','de-en')
```
The code runs but the download speed is **extremely slow**; the same behaviour is not observed with `wmt16` and `wmt18`.
2. When trying to download `wmt17 zh-en`, I got the following error:
> ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/388/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/388/timeline | null | completed | null | null | false | [
"similar slow download speed here for nlp.load_dataset('wmt14', 'fr-en')\r\n`\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 658M/658M [1:00:42<00:00, 181kB/s]\r\nDownloading: 100%|██████████████████████████████████████████████████████████| 918M/918M [1:39:38<00:00, 154kB/s]\r\nDow... |
https://api.github.com/repos/huggingface/datasets/issues/750 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/750/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/750/comments | https://api.github.com/repos/huggingface/datasets/issues/750/events | https://github.com/huggingface/datasets/issues/750 | 726,589,446 | MDU6SXNzdWU3MjY1ODk0NDY= | 750 | load_dataset doesn't include `features` in its hash | [] | closed | false | null | 0 | 2020-10-21T15:16:41Z | 2020-10-29T09:36:01Z | 2020-10-29T09:36:01Z | null | It looks like the function `load_dataset` does not include what's passed in the `features` argument when creating a hash for a given dataset. As a result, if a user includes new features from an already downloaded dataset, those are ignored.
Example: some models on the hub have a different ordering for the labels than what `datasets` uses for MNLI so I'd like to do something along the lines of:
```python
dataset = load_dataset("glue", "mnli")
features = dataset["train"].features
features["label"] = ClassLabel(names = ['entailment', 'contradiction', 'neutral']) # new label order
dataset = load_dataset("glue", "mnli", features=features)
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/750/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/750/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5531 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5531/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5531/comments | https://api.github.com/repos/huggingface/datasets/issues/5531/events | https://github.com/huggingface/datasets/issues/5531 | 1,584,387,276 | I_kwDODunzps5eb9TM | 5,531 | Invalid Arrow data from JSONL | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 0 | 2023-02-14T15:39:49Z | 2023-02-14T15:46:09Z | null | null | This code fails:
```python
from datasets import Dataset
ds = Dataset.from_json(path_to_file)
ds.data.validate()
```
raises
```python
ArrowInvalid: Column 2: In chunk 1: Invalid: Struct child array #3 invalid: Invalid: Length spanned by list offsets (4064) larger than values array (length 4063)
```
This causes many issues for @TevenLeScao:
- `map` fails because it fails to rewrite invalid arrow arrays
```python
~/Desktop/hf/datasets/src/datasets/arrow_writer.py in write_examples_on_file(self)
438 if all(isinstance(row[0][col], (pa.Array, pa.ChunkedArray)) for row in self.current_examples):
439 arrays = [row[0][col] for row in self.current_examples]
--> 440 batch_examples[col] = array_concat(arrays)
441 else:
442 batch_examples[col] = [
~/Desktop/hf/datasets/src/datasets/table.py in array_concat(arrays)
1885
1886 if not _is_extension_type(array_type):
-> 1887 return pa.concat_arrays(arrays)
1888
1889 def _offsets_concat(offsets):
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.concat_arrays()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowIndexError: array slice would exceed array length
```
- `to_dict()` **segfaults** ⚠️
```python
/Users/runner/work/crossbow/crossbow/arrow/cpp/src/arrow/array/data.cc:99: Check failed: (off) <= (length) Slice offset greater
than array length
```
To reproduce: unzip the archive and run the above code using `sanity_oscar_en.jsonl`
[sanity_oscar_en.jsonl.zip](https://github.com/huggingface/datasets/files/10734124/sanity_oscar_en.jsonl.zip)
PS: reading using pandas and converting to Arrow works though (note that the dataset lives in RAM in this case):
```python
ds = Dataset.from_pandas(pd.read_json(path_to_file, lines=True))
ds.data.validate()
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5531/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5531/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2210/comments | https://api.github.com/repos/huggingface/datasets/issues/2210/events | https://github.com/huggingface/datasets/issues/2210 | 855,709,400 | MDU6SXNzdWU4NTU3MDk0MDA= | 2,210 | dataloading slow when using HUGE dataset | [] | closed | false | null | 2 | 2021-04-12T08:33:02Z | 2021-04-13T02:03:05Z | 2021-04-13T02:03:05Z | null | Hi,
When I use datasets with 600GB of data, dataloading slows down significantly.
I am experimenting with two datasets: one is about 60GB and the other 600GB.
Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle ddp training.
Looking at the pytorch-lightning profiler output for the two runs, I see that fetching a batch (`get_train_batch`) consumes an unreasonable amount of time when the data is large. What could be the cause?
* 60GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 200.33 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 71.994 |1 | 71.994 | 35.937 |
run_training_batch | 0.64373 |100 | 64.373 | 32.133 |
optimizer_step_and_closure_0 | 0.64322 |100 | 64.322 | 32.108 |
training_step_and_backward | 0.61004 |100 | 61.004 | 30.452 |
model_backward | 0.37552 |100 | 37.552 | 18.745 |
model_forward | 0.22813 |100 | 22.813 | 11.387 |
training_step | 0.22759 |100 | 22.759 | 11.361 |
get_train_batch | 0.066385 |100 | 6.6385 | 3.3138 |
```
* 600GB data
```
Action | Mean duration (s) |Num calls | Total time (s) | Percentage % |
------------------------------------------------------------------------------------------------------------------------------------
Total | - |_ | 3285.6 | 100 % |
------------------------------------------------------------------------------------------------------------------------------------
run_training_epoch | 1397.9 |1 | 1397.9 | 42.546 |
run_training_batch | 7.2596 |100 | 725.96 | 22.095 |
optimizer_step_and_closure_0 | 7.2589 |100 | 725.89 | 22.093 |
training_step_and_backward | 7.223 |100 | 722.3 | 21.984 |
model_backward | 6.9662 |100 | 696.62 | 21.202 |
get_train_batch | 6.322 |100 | 632.2 | 19.241 |
model_forward | 0.24902 |100 | 24.902 | 0.75789 |
training_step | 0.2485 |100 | 24.85 | 0.75633 |
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2210/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2210/timeline | null | completed | null | null | false | [
"Hi ! Yes this is an issue with `datasets<=1.5.0`\r\nThis issue has been fixed by #2122 , we'll do a new release soon :)\r\nFor now you can test it on the `master` branch.",
"Hi, thank you for your answer. I did not realize that my issue stems from the same problem. "
] |
https://api.github.com/repos/huggingface/datasets/issues/5698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5698/comments | https://api.github.com/repos/huggingface/datasets/issues/5698/events | https://github.com/huggingface/datasets/issues/5698 | 1,652,183,611 | I_kwDODunzps5ielI7 | 5,698 | Add Qdrant as another search index | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 1 | 2023-04-03T14:25:19Z | 2023-04-11T10:28:40Z | null | null | ### Feature request
I'd suggest adding Qdrant (https://qdrant.tech) as another available search index, so users can directly build an index from a dataset. Currently, only FAISS and ElasticSearch are supported: https://huggingface.co/docs/datasets/faiss_es
### Motivation
ElasticSearch is a keyword-based search system, while FAISS is a vector search library. A vector database such as Qdrant is a different kind of tool: it is based on similarity (like FAISS) but is not limited to a single machine. That makes a vector database well-suited for bigger datasets and for collaboration when several people want to access a particular dataset.
### Your contribution
I can provide a PR implementing that functionality on my own. | {
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5698/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5698/timeline | null | null | null | null | false | [
"@mariosasko I'd appreciate your feedback on this. "
] |
https://api.github.com/repos/huggingface/datasets/issues/4337 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4337/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4337/comments | https://api.github.com/repos/huggingface/datasets/issues/4337/events | https://github.com/huggingface/datasets/pull/4337 | 1,234,470,083 | PR_kwDODunzps43vuzF | 4,337 | Eval metadata batch 3: Reddit, Rotten Tomatoes, SemEval 2010, Sentiment 140, SMS Spam, Snips, SQuAD, SQuAD v2, Timit ASR | [] | closed | false | null | 2 | 2022-05-12T20:52:02Z | 2022-05-16T16:26:19Z | 2022-05-16T16:18:30Z | null | Adding evaluation metadata for:
- Reddit
- Rotten Tomatoes
- SemEval 2010
- Sentiment 140
- SMS Spam
- Snips
- SQuAD
- SQuAD v2
- Timit ASR | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4337/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4337/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4337.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4337",
"merged_at": "2022-05-16T16:18:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4337.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4337"
} | true | [
"Summary of CircleCI errors:\r\n\r\n- **sem_eval_2010_task_8**: missing 8 required positional arguments: 'annotations_creators', 'language_creators', 'licenses', 'multilinguality', 'size_categories', 'source_datasets', 'task_categories', and 'task_ids'\r\n- **sms_spam**: `Data Instances` and`Data Splits` are empty.... |
https://api.github.com/repos/huggingface/datasets/issues/2407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2407/comments | https://api.github.com/repos/huggingface/datasets/issues/2407/events | https://github.com/huggingface/datasets/issues/2407 | 903,111,755 | MDU6SXNzdWU5MDMxMTE3NTU= | 2,407 | .map() function got an unexpected keyword argument 'cache_file_name' | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-05-27T01:54:26Z | 2021-05-27T13:46:40Z | 2021-05-27T13:46:40Z | null | ## Describe the bug
I'm trying to save the result of datasets.map() to a specific file, so that I can easily share it among multiple computers without reprocessing the dataset. However, when I try to pass an argument 'cache_file_name' to the .map() function, it throws an error that ".map() function got an unexpected keyword argument 'cache_file_name'".
I believe I'm using the latest `datasets`, 1.6.2. It also seems like both the documentation and the actual code indicate there is a 'cache_file_name' argument for the .map() function.
Here is the code I use
## Steps to reproduce the bug
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokenize_function(examples):
    return tokenizer(examples[text_column_name])

logger.info("Mapping dataset to tokenized dataset.")
tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=True,
    cache_file_name="my_tokenized_file"
)
```
## Actual results
tokenized_datasets = datasets.map(
TypeError: map() got an unexpected keyword argument 'cache_file_name'
## Environment info
- `datasets` version:1.6.2
- Platform:Linux-4.18.0-193.28.1.el8_2.x86_64-x86_64-with-glibc2.10
- Python version:3.8.5
- PyArrow version:3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2407/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2407/timeline | null | completed | null | null | false | [
"Hi @cindyxinyiwang,\r\nDid you try adding `.arrow` after `cache_file_name` argument? Here I think they're expecting something like that only for a cache file:\r\nhttps://github.com/huggingface/datasets/blob/e08362256fb157c0b3038437fc0d7a0bbb50de5c/src/datasets/arrow_dataset.py#L1556-L1558",
"Hi ! `cache_file_nam... |
https://api.github.com/repos/huggingface/datasets/issues/3103 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3103/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3103/comments | https://api.github.com/repos/huggingface/datasets/issues/3103/events | https://github.com/huggingface/datasets/pull/3103 | 1,029,069,310 | PR_kwDODunzps4tUzJQ | 3,103 | Fix project description in PyPI | [] | closed | false | null | 0 | 2021-10-18T12:47:29Z | 2021-10-18T12:59:57Z | 2021-10-18T12:59:56Z | null | Fix project description appearing in PyPI, so that it contains the content of the README.md file (like transformers).
Currently, the `datasets` project description on PyPI shows the release instructions addressed to core maintainers: https://pypi.org/project/datasets/1.13.3/
Fix #3102. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3103/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3103/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3103.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3103",
"merged_at": "2021-10-18T12:59:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3103.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3103"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1080/comments | https://api.github.com/repos/huggingface/datasets/issues/1080/events | https://github.com/huggingface/datasets/pull/1080 | 756,663,464 | MDExOlB1bGxSZXF1ZXN0NTMyMTc3NDg5 | 1,080 | Add WikiANN NER dataset | [] | closed | false | null | 1 | 2020-12-03T23:09:24Z | 2020-12-06T17:18:55Z | 2020-12-06T17:18:55Z | null | This PR adds the full set of 176 languages from the balanced train/dev/test splits of WikiANN / PAN-X from: https://github.com/afshinrahimi/mmner
Until now, only 40 of these languages were available in `datasets` as part of the XTREME benchmark.
Courtesy of the dataset author, we can now download this dataset from a Dropbox URL without needing a manual download anymore 🥳, so at some point it would be worth updating the PAN-X subset of XTREME as well 😄
Link to gist with some snippets for producing dummy data: https://gist.github.com/lewtun/5b93294ab6dbcf59d1493dbe2cfd6bb9
P.S. @yjernite I think I was confused about needing to generate a set of YAML tags per config, so ended up just adding a single one in the README. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1080/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1080/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1080.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1080",
"merged_at": "2020-12-06T17:18:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1080.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1080"
} | true | [
"Dataset card added, so ready for review!"
] |
https://api.github.com/repos/huggingface/datasets/issues/3652 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3652/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3652/comments | https://api.github.com/repos/huggingface/datasets/issues/3652/events | https://github.com/huggingface/datasets/pull/3652 | 1,118,808,738 | PR_kwDODunzps4xzinr | 3,652 | sp. Columbia => Colombia | [] | closed | false | null | 2 | 2022-01-31T00:41:03Z | 2022-02-09T16:55:25Z | 2022-01-31T08:29:07Z | null | "Columbia" is various places in North America. The country is "Colombia". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3652/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3652/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3652.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3652",
"merged_at": "2022-01-31T08:29:07Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3652.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3652"
} | true | [
"The original openslr site mixed both names https://openslr.org/72/ :-)",
"Yeah, I filed the issue to have it fixed there last year, but it looks like they missed a few."
] |
https://api.github.com/repos/huggingface/datasets/issues/739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/739/comments | https://api.github.com/repos/huggingface/datasets/issues/739/events | https://github.com/huggingface/datasets/pull/739 | 723,044,066 | MDExOlB1bGxSZXF1ZXN0NTA0Njk5NTY3 | 739 | Add wiki dpr multiset embeddings | [] | closed | false | null | 3 | 2020-10-16T09:05:49Z | 2020-11-26T14:02:50Z | 2020-11-26T14:02:49Z | null | There are two DPR encoders, one trained on Natural Questions and one trained on a multiset/hybrid dataset.
Previously only the embeddings from the encoder trained on NQ were available. I'm adding the ones from the encoder trained on the multiset/hybrid dataset.
In the configuration you can now specify `embeddings_name="nq"` or `embeddings_name="multiset"` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/739/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/739/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/739.diff",
"html_url": "https://github.com/huggingface/datasets/pull/739",
"merged_at": "2020-11-26T14:02:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/739.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/739"
} | true | [
"I still have to compute the dataset_infos, and build + host the indexes",
"update: I'm computing the metadata, will update the PR soon",
"Finally all green and ready to merge :)"
] |
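(Editor's note: a usage sketch of the option this PR description introduces; grounded in the text above, not in the merged code.)
```python
from datasets import load_dataset

# Pick the embeddings from the encoder trained on the multiset/hybrid data
# instead of the default Natural Questions ("nq") encoder.
ds = load_dataset("wiki_dpr", embeddings_name="multiset")
```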
https://api.github.com/repos/huggingface/datasets/issues/334 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/334/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/334/comments | https://api.github.com/repos/huggingface/datasets/issues/334/events | https://github.com/huggingface/datasets/pull/334 | 649,661,791 | MDExOlB1bGxSZXF1ZXN0NDQzMjk1NjQ0 | 334 | Add dataset.shard() method | [] | closed | false | null | 1 | 2020-07-02T06:05:19Z | 2020-07-06T12:35:36Z | 2020-07-06T12:35:36Z | null | Fixes https://github.com/huggingface/nlp/issues/312 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/334/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/334/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/334.diff",
"html_url": "https://github.com/huggingface/datasets/pull/334",
"merged_at": "2020-07-06T12:35:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/334.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/334"
} | true | [
"Great, done!"
] |
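(Editor's note: a minimal usage sketch of the `shard` method this PR adds; the dataset choice is arbitrary.)
```python
from datasets import load_dataset

ds = load_dataset("glue", "mnli", split="train")
# Keep the first of 4 shards; in a distributed job each worker
# would pass its own index in [0, num_shards).
shard = ds.shard(num_shards=4, index=0)
```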
https://api.github.com/repos/huggingface/datasets/issues/724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/724/comments | https://api.github.com/repos/huggingface/datasets/issues/724/events | https://github.com/huggingface/datasets/issues/724 | 718,947,700 | MDU6SXNzdWU3MTg5NDc3MDA= | 724 | need to redirect /nlp to /datasets and remove outdated info | [] | closed | false | null | 4 | 2020-10-11T23:12:12Z | 2020-10-14T17:00:12Z | 2020-10-14T17:00:12Z | null | It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had the links marked up, the new one is just a jumble of text in one chunk and no markup for links (i.e. not clickable). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/724/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/724/timeline | null | completed | null | null | false | [
"Should be fixed now: \r\n\r\n\r\n\r\nNot sure I understand what you mean by the second part?\r\n",
"Thank you!\r\n\r\n> Not sure I understand what you mean by the second part?\r\n\r\nCompare the 2:\r\n* htt... |
https://api.github.com/repos/huggingface/datasets/issues/4931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4931/comments | https://api.github.com/repos/huggingface/datasets/issues/4931/events | https://github.com/huggingface/datasets/pull/4931 | 1,362,298,764 | PR_kwDODunzps4-Y3L6 | 4,931 | Fix missing tags in dataset cards | [] | closed | false | null | 1 | 2022-09-05T17:03:04Z | 2022-09-22T12:40:15Z | 2022-09-06T05:39:29Z | null | Fix missing tags in dataset cards:
- coqa
- hyperpartisan_news_detection
- opinosis
- scientific_papers
- scifact
- search_qa
- wiki_qa
- wiki_split
- wikisql
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908
- #4921 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4931/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4931/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4931.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4931",
"merged_at": "2022-09-06T05:39:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4931.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4931"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2084 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2084/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2084/comments | https://api.github.com/repos/huggingface/datasets/issues/2084/events | https://github.com/huggingface/datasets/issues/2084 | 835,750,671 | MDU6SXNzdWU4MzU3NTA2NzE= | 2,084 | CUAD - Contract Understanding Atticus Dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2021-03-19T09:27:43Z | 2021-04-16T08:50:44Z | 2021-04-16T08:50:44Z | null | ## Adding a Dataset
- **Name:** CUAD - Contract Understanding Atticus Dataset
- **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- **Paper:** https://arxiv.org/abs/2103.06268
- **Data:** https://github.com/TheAtticusProject/cuad/
- **Motivation:** good domain specific datasets are valuable
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2084/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2084/timeline | null | completed | null | null | false | [
"+1 on this request"
] |
https://api.github.com/repos/huggingface/datasets/issues/2450 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2450/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2450/comments | https://api.github.com/repos/huggingface/datasets/issues/2450/events | https://github.com/huggingface/datasets/issues/2450 | 912,890,291 | MDU6SXNzdWU5MTI4OTAyOTE= | 2,450 | BLUE file not found | [] | closed | false | null | 2 | 2021-06-06T17:01:54Z | 2021-06-07T10:46:15Z | 2021-06-07T10:46:15Z | null | Hi, I'm having the following issue when I try to load the `blue` metric.
```shell
import datasets
metric = datasets.load_metric('blue')
Traceback (most recent call last):
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 320, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.7.0/metrics/blue/blue.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 332, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 291, in cached_path
use_auth_token=download_config.use_auth_token,
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/utils/file_utils.py", line 621, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/metrics/blue/blue.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 605, in load_metric
dataset=False,
File "/home/irfan/environments/Perplexity_Transformers/lib/python3.6/site-packages/datasets/load.py", line 343, in prepare_module
combined_path, github_file_path
FileNotFoundError: Couldn't find file locally at blue/blue.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.7.0/metrics/blue/blue.py.
The file is also not present on the master branch on github.
```
Here is dataset installed version info
```shell
pip freeze | grep datasets
datasets==1.7.0
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2450/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2450/timeline | null | completed | null | null | false | [
"Hi ! The `blue` metric doesn't exist, but the `bleu` metric does.\r\nYou can get the full list of metrics [here](https://github.com/huggingface/datasets/tree/master/metrics) or by running\r\n```python\r\nfrom datasets import list_metrics\r\n\r\nprint(list_metrics())\r\n```",
"Ah, my mistake. Thanks for correctin... |
https://api.github.com/repos/huggingface/datasets/issues/5552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5552/comments | https://api.github.com/repos/huggingface/datasets/issues/5552/events | https://github.com/huggingface/datasets/pull/5552 | 1,592,186,703 | PR_kwDODunzps5KXMjA | 5,552 | Make tiktoken tokenizers hashable | [] | closed | false | null | 4 | 2023-02-20T16:50:09Z | 2023-02-21T13:20:42Z | 2023-02-21T13:13:05Z | null | Fix for https://discord.com/channels/879548962464493619/1075729627546406912/1075729627546406912
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5552/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5552/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5552.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5552",
"merged_at": "2023-02-21T13:13:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5552.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5552"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/4099 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4099/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4099/comments | https://api.github.com/repos/huggingface/datasets/issues/4099/events | https://github.com/huggingface/datasets/issues/4099 | 1,193,253,768 | I_kwDODunzps5HH5uI | 4,099 | UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128) | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2022-04-05T14:42:38Z | 2022-04-06T06:37:44Z | 2022-04-06T06:35:54Z | null | ## Describe the bug
Error "UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)" is thrown when downloading dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
datasets = load_dataset("nielsr/XFUN", "xfun.ja")
```
## Expected results
Dataset should be downloaded without exceptions
## Actual results
Stack trace (for the second-time execution):
Downloading and preparing dataset xfun/xfun.ja to /root/.cache/huggingface/datasets/nielsr___xfun/xfun.ja/0.0.0/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477...
Downloading data files: 100%
2/2 [00:00<00:00, 88.48it/s]
Extracting data files: 100%
2/2 [00:00<00:00, 79.60it/s]
UnicodeDecodeErrorTraceback (most recent call last)
<ipython-input-31-79c26bd1109c> in <module>
1 from datasets import load_dataset
2
----> 3 datasets = load_dataset("nielsr/XFUN", "xfun.ja")
/usr/local/lib/python3.6/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, **config_kwargs)
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
604 )
605
--> 606 # By default, return all splits
607 if split is None:
608 split = {s: s for s in self.info.splits}
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos)
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
692 Args:
693 split: `datasets.Split` which subset of the data to read.
--> 694
695 Returns:
696 `Dataset`
/usr/local/lib/python3.6/dist-packages/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys)
/usr/local/lib/python3.6/dist-packages/tqdm/notebook.py in __iter__(self)
252 if not self.disable:
253 self.display(check_delay=False)
--> 254
255 def __iter__(self):
256 try:
/usr/local/lib/python3.6/dist-packages/tqdm/std.py in __iter__(self)
1183 for obj in iterable:
1184 yield obj
-> 1185 return
1186
1187 mininterval = self.mininterval
~/.cache/huggingface/modules/datasets_modules/datasets/nielsr--XFUN/e06e948b673d1be9a390a83c05c10e49438bf03dd85ae9a4fe06f8747a724477/XFUN.py in _generate_examples(self, filepaths)
140 logger.info("Generating examples from = %s", filepath)
141 with open(filepath[0], "r") as f:
--> 142 data = json.load(f)
143
144 for doc in data["documents"]:
/usr/lib/python3.6/json/__init__.py in load(fp, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
294
295 """
--> 296 return loads(fp.read(),
297 cls=cls, object_hook=object_hook,
298 parse_float=parse_float, parse_int=parse_int,
/usr/lib/python3.6/encodings/ascii.py in decode(self, input, final)
24 class IncrementalDecoder(codecs.IncrementalDecoder):
25 def decode(self, input, final=False):
---> 26 return codecs.ascii_decode(input, self.errors)[0]
27
28 class StreamWriter(Codec,codecs.StreamWriter):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe5 in position 213: ordinal not in range(128)
## Environment info
- `datasets` version: 2.0.0 (but reproduced with many previous versions)
- Platform: Docker: Linux da5b74136d6b 5.3.0-1031-azure #32~18.04.1-Ubuntu SMP Mon Jun 22 15:27:23 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux ; Base docker image is : huggingface/transformers-pytorch-cpu
- Python version: 3.6.9
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4099/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4099/timeline | null | completed | null | null | false | [
"Hi @andreybond, thanks for reporting.\r\n\r\nUnfortunately, I'm not able to able to reproduce your issue:\r\n```python\r\nIn [4]: from datasets import load_dataset\r\n ...: datasets = load_dataset(\"nielsr/XFUN\", \"xfun.ja\")\r\n\r\nIn [5]: datasets\r\nOut[5]: \r\nDatasetDict({\r\n train: Dataset({\r\n ... |
https://api.github.com/repos/huggingface/datasets/issues/5978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5978/comments | https://api.github.com/repos/huggingface/datasets/issues/5978/events | https://github.com/huggingface/datasets/pull/5978 | 1,770,187,053 | PR_kwDODunzps5Tru2_ | 5,978 | Release: 2.13.1 | [] | closed | false | null | 4 | 2023-06-22T18:23:11Z | 2023-06-22T18:40:24Z | 2023-06-22T18:30:16Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5978/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5978/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5978.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5978",
"merged_at": "2023-06-22T18:30:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5978.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5978"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/5294 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5294/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5294/comments | https://api.github.com/repos/huggingface/datasets/issues/5294/events | https://github.com/huggingface/datasets/pull/5294 | 1,463,679,582 | PR_kwDODunzps5DqgLW | 5,294 | Support streaming datasets with pathlib.Path.with_suffix | [] | closed | false | null | 1 | 2022-11-24T18:04:38Z | 2022-11-29T07:09:08Z | 2022-11-29T07:06:32Z | null | This PR extends the support in streaming mode for datasets that use `pathlib.Path.with_suffix`.
Fix #5293. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5294/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5294/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5294.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5294",
"merged_at": "2022-11-29T07:06:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5294.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5294"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4063 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4063/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4063/comments | https://api.github.com/repos/huggingface/datasets/issues/4063/events | https://github.com/huggingface/datasets/pull/4063 | 1,186,611,368 | PR_kwDODunzps41UiDm | 4,063 | Increase max retries for GitHub metrics | [] | closed | false | null | 1 | 2022-03-30T15:12:48Z | 2022-03-31T14:42:52Z | 2022-03-31T14:37:47Z | null | As GitHub recurrently raises connectivity issues, this PR increases the number of max retries to request GitHub metrics.
Related to:
- #3134
Also related to:
- #4059 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4063/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4063/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4063.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4063",
"merged_at": "2022-03-31T14:37:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4063.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4063"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3468/comments | https://api.github.com/repos/huggingface/datasets/issues/3468/events | https://github.com/huggingface/datasets/pull/3468 | 1,085,871,301 | PR_kwDODunzps4wIozO | 3,468 | Add COCO dataset | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 7 | 2021-12-21T14:07:50Z | 2022-10-03T09:38:07Z | 2022-10-03T09:36:08Z | null | This PR adds the MS COCO dataset. Compared to the [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/object_detection/coco.py) script, this implementation adds 8 additional configs to cover the tasks other than object detection.
Some notes:
* the data exposed by TFDS is contained in the `2014`, `2015`, `2017` and `2017_panoptic_segmentation` configs here
* I've updated `encode_nested_example` for easier handling of missing values (cc @lhoestq @albertvillanova; will add tests if you are OK with the changes in `features.py`)
* this implementation should fix https://github.com/huggingface/datasets/pull/3377#issuecomment-985559427
TODOs:
- [x] dataset card
- [ ] dummy data
cc @merveenoyan
Closes #2526 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3468/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3468/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/3468.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3468",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3468.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3468"
} | true | [
"The CI failures other than a missing dummy data file and missing fields in the card are unrelated to this PR. ",
"Thanks a lot for this great work and fixing TFDS based script @mariosasko 🤗 will generate the dummy dataset and write the model card tomorrow!",
"@mariosasko I added the dataset card, I'm on the d... |
https://api.github.com/repos/huggingface/datasets/issues/2941 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2941/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2941/comments | https://api.github.com/repos/huggingface/datasets/issues/2941/events | https://github.com/huggingface/datasets/issues/2941 | 1,000,000,711 | I_kwDODunzps47mszH | 2,941 | OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "2edb81",
"default": false,
"descrip... | open | false | null | 1 | 2021-09-18T10:39:13Z | 2022-01-19T14:10:07Z | null | null | ## Describe the bug
Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`.
## Steps to reproduce the bug
```python
>>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko')
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num_examples=7345075, dataset_name='oscar'), 'recorded': SplitInfo(name='train', num_bytes=25284578514, num_examples=7344907, dataset_name='oscar')}]
```
## Expected results
Loading is successful.
## Actual results
Loading throws above error.
## Environment info
- `datasets` version: 1.12.1
- Platform: Linux-5.4.0-81-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0
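A possible stopgap (not a fix for the stale recorded metadata) is to skip the split-size verification when loading:
```python
from datasets import load_dataset

# Stopgap only: skip the checksum/split-size verification so loading succeeds
# despite the outdated recorded num_examples for this config.
dataset = load_dataset("oscar", "unshuffled_original_ko", ignore_verifications=True)
```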
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2941/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2941/timeline | null | null | null | null | false | [
"I tried `unshuffled_original_da` and it is also not working"
] |
https://api.github.com/repos/huggingface/datasets/issues/1931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1931/comments | https://api.github.com/repos/huggingface/datasets/issues/1931/events | https://github.com/huggingface/datasets/pull/1931 | 814,225,074 | MDExOlB1bGxSZXF1ZXN0NTc4MjQ4NTA5 | 1,931 | add m_lama (multilingual lama) dataset | [] | closed | false | null | 3 | 2021-02-23T08:11:57Z | 2021-03-01T10:01:03Z | 2021-03-01T10:01:03Z | null | Add a multilingual (machine translated and automatically generated) version of the LAMA benchmark. For details see the paper https://arxiv.org/pdf/2102.00894.pdf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1931/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1931/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1931.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1931",
"merged_at": "2021-03-01T10:01:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1931.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1931"
} | true | [
"Hi, it seems I am somewhat stuck here. The failed test `ci/circleci: run_dataset_script_tests_pyarrow_1_WIN` seems to be caused by some broken connection (`ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host`). Any help on this is appreciated. \r\n\r\nEdit: Seems to... |
https://api.github.com/repos/huggingface/datasets/issues/2608 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2608/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2608/comments | https://api.github.com/repos/huggingface/datasets/issues/2608/events | https://github.com/huggingface/datasets/pull/2608 | 938,897,626 | MDExOlB1bGxSZXF1ZXN0Njg1MjAwMDYw | 2,608 | Support streaming JSON files | [] | closed | false | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | 0 | 2021-07-07T13:30:22Z | 2021-07-12T14:12:31Z | 2021-07-08T16:08:41Z | null | Use open in JSON dataset builder, so that it can be patched with xopen for streaming.
Close #2607. | {
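A minimal sketch of the resulting usage (the file URL is a placeholder):
```python
from datasets import load_dataset

# Because the JSON builder now goes through the patchable `open`, the file
# can be read lazily over HTTP instead of being downloaded first.
ds = load_dataset("json", data_files="https://example.com/data.jsonl", streaming=True)
print(next(iter(ds["train"])))
```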
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2608/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2608/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2608.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2608",
"merged_at": "2021-07-08T16:08:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2608.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2608"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2121/comments | https://api.github.com/repos/huggingface/datasets/issues/2121/events | https://github.com/huggingface/datasets/pull/2121 | 842,148,633 | MDExOlB1bGxSZXF1ZXN0NjAxNzc4NDc4 | 2,121 | Add Validation For README | [] | closed | false | null | 7 | 2021-03-26T17:02:17Z | 2021-05-10T13:17:18Z | 2021-05-10T09:41:41Z | null | Hi @lhoestq, @yjernite
This is a simple README parser. All classes specific to different sections can inherit from the `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
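A minimal sketch of the structure described above (field names taken from the `to_dict()` output below; the actual parser may differ):
```python
class Section:
    """One markdown heading plus the free text and child headings under it."""

    def __init__(self, name: str, level: int):
        self.name = name
        self.level = level          # number of leading '#' characters
        self.attributes = ""        # free text directly under this heading
        self.subsections = []       # nested Section objects

    def to_dict(self) -> dict:
        return {
            "name": self.name,
            "attributes": self.attributes.strip(),
            "subsections": [s.to_dict() for s in self.subsections],
        }
```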
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
```json
{
"name": "./datasets/fashion_mnist/README.md",
"attributes": "",
"subsections": [
{
"name": "Dataset Card for FashionMNIST",
"attributes": "",
"subsections": [
{
"name": "Table of Contents",
"attributes": "- [Dataset Description](#dataset-description)\n - [Dataset Summary](#dataset-summary)\n - [Supported Tasks](#supported-tasks-and-leaderboards)\n - [Languages](#languages)\n- [Dataset Structure](#dataset-structure)\n - [Data Instances](#data-instances)\n - [Data Fields](#data-instances)\n - [Data Splits](#data-instances)\n- [Dataset Creation](#dataset-creation)\n - [Curation Rationale](#curation-rationale)\n - [Source Data](#source-data)\n - [Annotations](#annotations)\n - [Personal and Sensitive Information](#personal-and-sensitive-information)\n- [Considerations for Using the Data](#considerations-for-using-the-data)\n - [Social Impact of Dataset](#social-impact-of-dataset)\n - [Discussion of Biases](#discussion-of-biases)\n - [Other Known Limitations](#other-known-limitations)\n- [Additional Information](#additional-information)\n - [Dataset Curators](#dataset-curators)\n - [Licensing Information](#licensing-information)\n - [Citation Information](#citation-information)\n - [Contributions](#contributions)",
"subsections": []
},
{
"name": "Dataset Description",
"attributes": "- **Homepage:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Repository:** [GitHub](https://github.com/zalandoresearch/fashion-mnist)\n- **Paper:** [arXiv](https://arxiv.org/pdf/1708.07747.pdf)\n- **Leaderboard:**\n- **Point of Contact:**",
"subsections": [
{
"name": "Dataset Summary",
"attributes": "Fashion-MNIST is a dataset of Zalando's article images\u2014consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. We intend Fashion-MNIST to serve as a direct drop-in replacement for the original MNIST dataset for benchmarking machine learning algorithms. It shares the same image size and structure of training and testing splits.",
"subsections": []
},
{
"name": "Supported Tasks and Leaderboards",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Languages",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Dataset Structure",
"attributes": "",
"subsections": [
{
"name": "Data Instances",
"attributes": "A data point comprises an image and its label.",
"subsections": []
},
{
"name": "Data Fields",
"attributes": "- `image`: a 2d array of integers representing the 28x28 image.\n- `label`: an integer between 0 and 9 representing the classes with the following mapping:\n | Label | Description |\n | --- | --- |\n | 0 | T-shirt/top |\n | 1 | Trouser |\n | 2 | Pullover |\n | 3 | Dress |\n | 4 | Coat |\n | 5 | Sandal |\n | 6 | Shirt |\n | 7 | Sneaker |\n | 8 | Bag |\n | 9 | Ankle boot |",
"subsections": []
},
{
"name": "Data Splits",
"attributes": "The data is split into training and test set. The training set contains 60,000 images and the test set 10,000 images.",
"subsections": []
}
]
},
{
"name": "Dataset Creation",
"attributes": "",
"subsections": [
{
"name": "Curation Rationale",
"attributes": "**From the arXiv paper:**\nThe original MNIST dataset contains a lot of handwritten digits. Members of the AI/ML/Data Science community love this dataset and use it as a benchmark to validate their algorithms. In fact, MNIST is often the first dataset researchers try. \"If it doesn't work on MNIST, it won't work at all\", they said. \"Well, if it does work on MNIST, it may still fail on others.\"\nHere are some good reasons:\n- MNIST is too easy. Convolutional nets can achieve 99.7% on MNIST. Classic machine learning algorithms can also achieve 97% easily. Check out our side-by-side benchmark for Fashion-MNIST vs. MNIST, and read \"Most pairs of MNIST digits can be distinguished pretty well by just one pixel.\"\n- MNIST is overused. In this April 2017 Twitter thread, Google Brain research scientist and deep learning expert Ian Goodfellow calls for people to move away from MNIST.\n- MNIST can not represent modern CV tasks, as noted in this April 2017 Twitter thread, deep learning expert/Keras author Fran\u00e7ois Chollet.",
"subsections": []
},
{
"name": "Source Data",
"attributes": "",
"subsections": [
{
"name": "Initial Data Collection and Normalization",
"attributes": "**From the arXiv paper:**\nFashion-MNIST is based on the assortment on Zalando\u2019s website. Every fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit. The original picture has a light-gray background (hexadecimal color: #fdfdfd) and stored in 762 \u00d7 1000 JPEG format. For efficiently serving different frontend components, the original picture is resampled with multiple resolutions, e.g. large, medium, small, thumbnail and tiny.\nWe use the front look thumbnail images of 70,000 unique products to build Fashion-MNIST. Those products come from different gender groups: men, women, kids and neutral. In particular, whitecolor products are not included in the dataset as they have low contrast to the background. The thumbnails (51 \u00d7 73) are then fed into the following conversion pipeline:\n1. Converting the input to a PNG image.\n2. Trimming any edges that are close to the color of the corner pixels. The \u201ccloseness\u201d is defined by the distance within 5% of the maximum possible intensity in RGB space.\n3. Resizing the longest edge of the image to 28 by subsampling the pixels, i.e. some rows and columns are skipped over.\n4. Sharpening pixels using a Gaussian operator of the radius and standard deviation of 1.0, with increasing effect near outlines.\n5. Extending the shortest edge to 28 and put the image to the center of the canvas.\n6. Negating the intensities of the image.\n7. Converting the image to 8-bit grayscale pixels.",
"subsections": []
},
{
"name": "Who are the source image producers?",
"attributes": "**From the arXiv paper:**\nEvery fashion product on Zalando has a set of pictures shot by professional photographers, demonstrating different aspects of the product, i.e. front and back looks, details, looks with model and in an outfit.",
"subsections": []
}
]
},
{
"name": "Annotations",
"attributes": "",
"subsections": [
{
"name": "Annotation process",
"attributes": "**From the arXiv paper:**\nFor the class labels, they use the silhouette code of the product. The silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando. Each product Zalando is the Europe\u2019s largest online fashion platform. Each product contains only one silhouette code.",
"subsections": []
},
{
"name": "Who are the annotators?",
"attributes": "**From the arXiv paper:**\nThe silhouette code is manually labeled by the in-house fashion experts and reviewed by a separate team at Zalando.",
"subsections": []
}
]
},
{
"name": "Personal and Sensitive Information",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Considerations for Using the Data",
"attributes": "",
"subsections": [
{
"name": "Social Impact of Dataset",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Discussion of Biases",
"attributes": "[More Information Needed]",
"subsections": []
},
{
"name": "Other Known Limitations",
"attributes": "[More Information Needed]",
"subsections": []
}
]
},
{
"name": "Additional Information",
"attributes": "",
"subsections": [
{
"name": "Dataset Curators",
"attributes": "Han Xiao and Kashif Rasul and Roland Vollgraf",
"subsections": []
},
{
"name": "Licensing Information",
"attributes": "MIT Licence",
"subsections": []
},
{
"name": "Citation Information",
"attributes": "@article{DBLP:journals/corr/abs-1708-07747,\n author = {Han Xiao and\n Kashif Rasul and\n Roland Vollgraf},\n title = {Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning\n Algorithms},\n journal = {CoRR},\n volume = {abs/1708.07747},\n year = {2017},\n url = {http://arxiv.org/abs/1708.07747},\n archivePrefix = {arXiv},\n eprint = {1708.07747},\n timestamp = {Mon, 13 Aug 2018 16:47:27 +0200},\n biburl = {https://dblp.org/rec/bib/journals/corr/abs-1708-07747},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}",
"subsections": []
},
{
"name": "Contributions",
"attributes": "Thanks to [@gchhablani](https://github.com/gchablani) for adding this dataset.",
"subsections": []
}
]
}
]
}
]
}
```
Thanks,
Gunjan | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2121/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2121/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2121.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2121",
"merged_at": "2021-05-10T09:41:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2121.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2121"
} | true | [
"Good start! Here are some proposed next steps:\r\n- We want the Class structure to reflect the template - so the parser know what section titles to expect and when something has gone wrong\r\n- As a result, we don't need to parse the table of contents, since it will always be the same\r\n- For each section/subsect... |
https://api.github.com/repos/huggingface/datasets/issues/5180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5180/comments | https://api.github.com/repos/huggingface/datasets/issues/5180/events | https://github.com/huggingface/datasets/issues/5180 | 1,431,012,438 | I_kwDODunzps5VS4RW | 5,180 | An example or recommendations for creating large image datasets? | [] | open | false | null | 2 | 2022-11-01T07:38:38Z | 2022-11-02T10:17:11Z | null | null | I know that Apache Beam and `datasets` have [some connector utilities](https://huggingface.co/docs/datasets/beam). But it's a little unclear what we mean by "But if you want to run your own Beam pipeline with Dataflow, here is how:". What does that pipeline do?
As a user, I was wondering if we have this support for creating large image datasets. If so, we should mention that [here](https://huggingface.co/docs/datasets/image_dataset).
Cc @lhoestq | {
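For what it's worth, a hedged sketch of one route that works today for large image sets without Beam (the directory and repo name are placeholders):
```python
from datasets import load_dataset

# "imagefolder" packs a local directory of images (optionally one subfolder
# per class) into a Dataset; push_to_hub then uploads it as parquet shards.
ds = load_dataset("imagefolder", data_dir="./my_images")
ds.push_to_hub("username/my-image-dataset")
```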
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5180/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5180/timeline | null | null | null | null | false | [
"The beam utilities allow to prepare a dataset as parquet in your cloud storage. From my perspective this CLI is not super easy to use, but we've been working on a new python API to prepare a dataset in your cloud storage:\r\n```python\r\nfrom datasets import load_dataset_builder\r\n\r\nbuilder = load_dataset_build... |
https://api.github.com/repos/huggingface/datasets/issues/1744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1744/comments | https://api.github.com/repos/huggingface/datasets/issues/1744/events | https://github.com/huggingface/datasets/pull/1744 | 787,649,811 | MDExOlB1bGxSZXF1ZXN0NTU2MzA0MjU4 | 1,744 | Add missing "brief" entries to reuters | [] | closed | false | null | 2 | 2021-01-17T07:58:49Z | 2021-01-18T11:26:09Z | 2021-01-18T11:26:09Z | null | This brings the number of examples for ModApte to match the stated `Training set (9,603 docs)...Test Set (3,299 docs)` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1744/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1744/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1744.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1744",
"merged_at": "2021-01-18T11:26:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1744.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1744"
} | true | [
"@lhoestq I ran `make style` but CI code quality still failing and I don't have access to logs",
"It's also likely that due to the previous placement of the field initialization, much of the data about topics etc was simply wrong and carried over from previous entries. Model scores seem to improve significantly w... |
https://api.github.com/repos/huggingface/datasets/issues/4050 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4050/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4050/comments | https://api.github.com/repos/huggingface/datasets/issues/4050/events | https://github.com/huggingface/datasets/pull/4050 | 1,184,346,501 | PR_kwDODunzps41NAMF | 4,050 | Add RVL-CDIP dataset | [] | closed | false | null | 14 | 2022-03-29T06:00:02Z | 2022-04-22T09:55:07Z | 2022-04-21T17:15:41Z | null | Resolves #2762
Dataset Request : Add RVL-CDIP dataset [#2762](https://github.com/huggingface/datasets/issues/2762)
This PR adds the RVL-CDIP dataset.
The dataset is distributed via a Google Drive link and wasn't getting downloaded automatically, so I have provided manual_download_instructions.
- I have added the dummy_data.zip as well.
I need input on how to run the real-data and dummy-data tests for datasets that require a manual download.
Inputs and suggestions for improvement are welcome. Thank you. | {
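For reference, a hypothetical loading example given the manual-download setup above (the extraction path is a placeholder):
```python
from datasets import load_dataset

# The Google Drive archive is fetched by hand per manual_download_instructions,
# then its extracted path is passed via `data_dir`.
dataset = load_dataset("rvl_cdip", data_dir="/path/to/extracted/rvl-cdip")
```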
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4050/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4050/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4050.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4050",
"merged_at": "2022-04-21T17:15:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4050.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4050"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks a lot for inputs. I'll use the URL suggested and check.\r\n\r\n> we need to implement the streamable (can't use os.path.join) and the non-streamable versions of _generate_examples.\r\n\r\nSure. I will check the reference and ... |
https://api.github.com/repos/huggingface/datasets/issues/4033 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4033/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4033/comments | https://api.github.com/repos/huggingface/datasets/issues/4033/events | https://github.com/huggingface/datasets/pull/4033 | 1,182,984,445 | PR_kwDODunzps41Ie6w | 4,033 | Fix checksum error in cats_vs_dogs dataset | [] | closed | false | null | 1 | 2022-03-28T07:01:25Z | 2022-03-28T07:49:39Z | 2022-03-28T07:44:24Z | null | Recent PR updated the metadata JSON file of cats_vs_dogs dataset:
- #3878
However, that new JSON file contains a None checksum.
This PR fixes it.
Fix #4032. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4033/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4033/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4033.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4033",
"merged_at": "2022-03-28T07:44:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4033.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4033"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3262/comments | https://api.github.com/repos/huggingface/datasets/issues/3262/events | https://github.com/huggingface/datasets/pull/3262 | 1,052,455,082 | PR_kwDODunzps4uej4t | 3,262 | asserts replaced with exception for image classification task, csv, json | [] | closed | false | null | 0 | 2021-11-12T22:34:59Z | 2021-11-15T11:08:37Z | 2021-11-15T11:08:37Z | null | Fixes for csv, json in io module and image_classification task with tests referenced in https://github.com/huggingface/datasets/issues/3171 | {
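For illustration, the kind of change applied here (a generic sketch, not the exact diff):
```python
def check_data_files(data_files: list) -> None:
    # before: assert len(data_files) > 0, "At least one data file must be specified"
    # after: raise a typed, catchable exception instead of an AssertionError
    if not data_files:
        raise ValueError("At least one data file must be specified")
```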
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3262/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3262/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3262.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3262",
"merged_at": "2021-11-15T11:08:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3262.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3262"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3892/comments | https://api.github.com/repos/huggingface/datasets/issues/3892/events | https://github.com/huggingface/datasets/pull/3892 | 1,166,227,003 | PR_kwDODunzps40ShYB | 3,892 | Fix CLI test checksums | [] | closed | false | null | 4 | 2022-03-11T10:04:04Z | 2022-03-15T12:28:24Z | 2022-03-15T12:28:23Z | null | Previous PR:
- #3796
introduced a side effect: `datasets-cli test` generates `dataset_infos.json` with `None` checksum values.
See:
- #3805
This PR introduces a way for `datasets-cli test` to force to record infos, even if `verify_infos=False`
Close #3848.
CC: @craffel | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3892/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3892/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3892.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3892",
"merged_at": "2022-03-15T12:28:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3892.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3892"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3892). All of your documentation changes will be reflected on that endpoint.",
"Feel free to merge if it's good for you :)",
"I've added a test @lhoestq. Once all green, I'll merge. ",
"Last failing tests do not have nothin... |
https://api.github.com/repos/huggingface/datasets/issues/4447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4447/comments | https://api.github.com/repos/huggingface/datasets/issues/4447/events | https://github.com/huggingface/datasets/pull/4447 | 1,260,041,805 | PR_kwDODunzps45E4A- | 4,447 | Minor fixes/improvements in `scene_parse_150` card | [] | closed | false | null | 1 | 2022-06-03T15:22:34Z | 2022-06-06T15:50:25Z | 2022-06-06T15:41:37Z | null | Add `paperswithcode_id` and fix some links in the `scene_parse_150` card. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4447/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4447/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4447.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4447",
"merged_at": "2022-06-06T15:41:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4447.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4447"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1565 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1565/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1565/comments | https://api.github.com/repos/huggingface/datasets/issues/1565/events | https://github.com/huggingface/datasets/pull/1565 | 766,333,940 | MDExOlB1bGxSZXF1ZXN0NTM5Mzg2MzEx | 1,565 | Create README.md | [] | closed | false | null | 5 | 2020-12-14T11:40:23Z | 2021-03-25T14:01:49Z | 2021-03-25T14:01:49Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1565/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1565/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1565.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1565",
"merged_at": "2021-03-25T14:01:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1565.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1565"
} | true | [
"@ManuelFay thanks you so much for adding a dataset card, this is such a cool contribution!\r\n\r\nThis looks like it uses an old template for the card we've moved things around a bit and we have an app you should be using to get the tags and the structure of the Data Fields paragraph :) Would you mind moving your ... | |
https://api.github.com/repos/huggingface/datasets/issues/2951 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2951/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2951/comments | https://api.github.com/repos/huggingface/datasets/issues/2951/events | https://github.com/huggingface/datasets/pull/2951 | 1,001,267,888 | PR_kwDODunzps4r-lGs | 2,951 | Dummy labels no longer on by default in `to_tf_dataset` | [] | closed | false | null | 2 | 2021-09-20T18:26:59Z | 2021-09-21T14:00:57Z | 2021-09-21T10:14:32Z | null | After more experimentation, I think I have a way to do things that doesn't depend on adding `dummy_labels` - they were quite a hacky solution anyway! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2951/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2951/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2951.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2951",
"merged_at": "2021-09-21T10:14:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2951.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2951"
} | true | [
"@lhoestq Let me make sure we never need it, and if not then I'll remove it entirely in a follow-up PR.",
"Thanks ;) it will be less confusing and easier to maintain to not keep unused hacky features"
] |
https://api.github.com/repos/huggingface/datasets/issues/3087 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3087/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3087/comments | https://api.github.com/repos/huggingface/datasets/issues/3087/events | https://github.com/huggingface/datasets/issues/3087 | 1,026,780,469 | I_kwDODunzps49M201 | 3,087 | Removing label column in a text classification dataset yields to errors | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-10-14T20:12:50Z | 2021-10-15T10:11:04Z | 2021-10-15T10:11:04Z | null | ## Describe the bug
This looks like #3059 but it's not linked to the cache this time. Removing the `label` column from a text classification dataset and then performing any processing will result in an error.
To reproduce:
```py
from datasets import load_dataset
from transformers import AutoTokenizer
raw_datasets = load_dataset("imdb")
raw_datasets = raw_datasets.remove_columns("label")
model_checkpoint = "distilbert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
context_length = 128
def tokenize_pad_and_truncate(texts):
return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length)
tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True)
```
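A hedged workaround (assuming the stale `TextClassification` task template is what still references the removed column) is to clear the task templates before mapping; without it, the call fails with the traceback below:
```python
# Hypothetical workaround: drop the task templates that still point at the
# removed "label" column so DatasetInfo.__post_init__ never looks it up.
for split in raw_datasets:
    raw_datasets[split].info.task_templates = None

tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True)
```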
Traceback:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-1-ba61bb32f786> in <module>
12 return tokenizer(texts["text"], truncation=True, padding="max_length", max_length=context_length)
13
---> 14 tokenized_datasets = raw_datasets.map(tokenize_pad_and_truncate, batched=True)
~/git/datasets/src/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc)
500 desc=desc,
501 )
--> 502 for k, dataset in self.items()
503 }
504 )
~/git/datasets/src/datasets/dataset_dict.py in <dictcomp>(.0)
500 desc=desc,
501 )
--> 502 for k, dataset in self.items()
503 }
504 )
~/git/datasets/src/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2051 new_fingerprint=new_fingerprint,
2052 disable_tqdm=disable_tqdm,
-> 2053 desc=desc,
2054 )
2055 else:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
501 self: "Dataset" = kwargs.pop("self")
502 # apply actual function
--> 503 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
504 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
505 for dataset in datasets:
~/git/datasets/src/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
468 }
469 # apply actual function
--> 470 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
471 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
472 # re-apply format to the output
~/git/datasets/src/datasets/fingerprint.py in wrapper(*args, **kwargs)
404 # Call actual function
405
--> 406 out = func(self, *args, **kwargs)
407
408 # Update fingerprint of in-place transforms + update in-place history of transforms
~/git/datasets/src/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2243 if os.path.exists(cache_file_name) and load_from_cache_file:
2244 logger.warning("Loading cached processed dataset at %s", cache_file_name)
-> 2245 info = self.info.copy()
2246 info.features = features
2247 info.task_templates = None
~/git/datasets/src/datasets/info.py in copy(self)
278
279 def copy(self) -> "DatasetInfo":
--> 280 return self.__class__(**{k: copy.deepcopy(v) for k, v in self.__dict__.items()})
281
282
~/git/datasets/src/datasets/info.py in __init__(self, description, citation, homepage, license, features, post_processed, supervised_keys, task_templates, builder_name, config_name, version, splits, download_checksums, download_size, post_processing_size, dataset_size, size_in_bytes)
~/git/datasets/src/datasets/info.py in __post_init__(self)
177 for idx, template in enumerate(self.task_templates):
178 if isinstance(template, TextClassification):
--> 179 labels = self.features[template.label_column].names
180 self.task_templates[idx] = TextClassification(
181 text_column=template.text_column, label_column=template.label_column, labels=labels
KeyError: 'label'
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3087/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3087/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/404 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/404/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/404/comments | https://api.github.com/repos/huggingface/datasets/issues/404/events | https://github.com/huggingface/datasets/pull/404 | 658,400,987 | MDExOlB1bGxSZXF1ZXN0NDUwMzY4Mjg4 | 404 | Add seed in metrics | [] | closed | false | null | 0 | 2020-07-16T17:27:05Z | 2020-07-20T10:12:35Z | 2020-07-20T10:12:34Z | null | With #361 we noticed that some metrics were not deterministic.
In this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`.
The seed is set only when `compute` is called, and reset afterwards.
Moreover, when calling `compute` with the same metric instance (i.e. same experiment_id), the metric will always return the same results given the same inputs. This is the case even if the seed was not specified by the user, as the previous seed is going to be reused.
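For illustration, a sketch of the intended usage per this PR (the metric name is chosen arbitrarily):
```python
from datasets import load_metric

metric = load_metric("glue", "mrpc", seed=42)  # seed added by this PR

metric.add_batch(predictions=[0, 1], references=[0, 1])
first = metric.compute()   # seed is set just before computing, reset after

metric.add_batch(predictions=[0, 1], references=[0, 1])
second = metric.compute()  # same instance, same inputs -> same results
assert first == second
```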
However, instantiating a metric twice (two different experiments) without specifying a seed can produce different results. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/404/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/404/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/404.diff",
"html_url": "https://github.com/huggingface/datasets/pull/404",
"merged_at": "2020-07-20T10:12:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/404.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/404"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/813/comments | https://api.github.com/repos/huggingface/datasets/issues/813/events | https://github.com/huggingface/datasets/issues/813 | 738,489,852 | MDU6SXNzdWU3Mzg0ODk4NTI= | 813 | How to implement DistributedSampler with datasets | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 4 | 2020-11-08T15:27:11Z | 2022-10-05T12:54:23Z | 2022-10-05T12:54:23Z | null | Hi,
I am using your datasets to define my dataloaders, and I am training finetune_trainer.py from the huggingface repo on them.
I need a DistributedSampler to be able to train the models on TPUs and to distribute the load across the TPU cores. Could you tell me how I can implement a distributed sampler when using datasets that are iterable? To give you more context, I have multiple datasets and I need to write a sampler for this case. Thanks. | {
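In case it helps, a rough sketch for the map-style (non-iterable) case on TPU; `train_dataset` is assumed to be a `datasets.Dataset`:
```python
import torch_xla.core.xla_model as xm
from torch.utils.data import DataLoader, DistributedSampler

# DistributedSampler needs __len__/__getitem__, so this only covers map-style
# datasets; for iterable ones, manual sharding per host (e.g.
# dataset.shard(num_shards, index)) is needed instead.
sampler = DistributedSampler(
    train_dataset,
    num_replicas=xm.xrt_world_size(),
    rank=xm.get_ordinal(),
    shuffle=True,
)
loader = DataLoader(train_dataset, sampler=sampler, batch_size=8)
```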
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/813/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/813/timeline | null | completed | null | null | false | [
"Hi Apparently I need to shard the data and give one host a chunk, could you provide me please with examples on how to do it? I want to use it jointly with finetune_trainer.py in huggingface repo seq2seq examples. thanks. ",
"Hey @rabeehkarimimahabadi I'm actually looking for the same feature. Did you manage to g... |
https://api.github.com/repos/huggingface/datasets/issues/5340 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5340/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5340/comments | https://api.github.com/repos/huggingface/datasets/issues/5340/events | https://github.com/huggingface/datasets/pull/5340 | 1,483,182,158 | PR_kwDODunzps5EtWo3 | 5,340 | Clean up DatasetInfo and Dataset docstrings | [] | closed | false | null | 1 | 2022-12-08T00:17:53Z | 2022-12-08T19:33:14Z | 2022-12-08T19:30:10Z | null | This PR cleans up the docstrings for `DatasetInfo` and about half of the methods in `Dataset`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5340/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5340/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5340.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5340",
"merged_at": "2022-12-08T19:30:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5340.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5340"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2861/comments | https://api.github.com/repos/huggingface/datasets/issues/2861/events | https://github.com/huggingface/datasets/pull/2861 | 985,081,871 | MDExOlB1bGxSZXF1ZXN0NzI0NDM2OTcw | 2,861 | fix: 🐛 be more specific when catching exceptions | [] | closed | false | null | 6 | 2021-09-01T12:18:12Z | 2021-09-02T09:53:36Z | 2021-09-02T09:52:03Z | null | The same specific exception is caught in other parts of the same
function. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2861/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2861/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2861.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2861",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2861.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2861"
} | true | [
"To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https://github.com/huggingface/datasets-preview-backend/issues/17 Is this right?\r\n\r\n",
"Yes, that's it. And to do that I'm trying to use https://pypi.org/project/stopit/, wh... |
https://api.github.com/repos/huggingface/datasets/issues/2040 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2040/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2040/comments | https://api.github.com/repos/huggingface/datasets/issues/2040/events | https://github.com/huggingface/datasets/issues/2040 | 830,169,387 | MDU6SXNzdWU4MzAxNjkzODc= | 2,040 | ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk | [] | closed | false | null | 4 | 2021-03-12T14:27:00Z | 2021-08-04T18:00:43Z | 2021-08-04T18:00:43Z | null | Hi there,
I am trying to concatenate two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DatasetDict`, and `PATH_DATA_CLS_*` are `Path` objects):
```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```
Yielding the following error:
```python
ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk.
However datasets' indices [1] come from memory and datasets' indices [0] come from disk.
```
I've been trying to solve this for quite some time now. Both `DatasetDict`s were created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove col, rename col). I can't figure it out, though...
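One hedged workaround, assuming the error comes from one dataset carrying an on-disk indices mapping and the other an in-memory one, is to materialize both mappings before concatenating:
```python
from datasets import concatenate_datasets, load_from_disk

ds_a = load_from_disk(PATH_DATA_CLS_A)["train"]
ds_b = load_from_disk(PATH_DATA_CLS_B)["train"]
# flatten_indices() writes each indices mapping into the data itself, so
# neither dataset brings an on-disk indices file into the concatenation.
combined = concatenate_datasets([ds_a.flatten_indices(), ds_b.flatten_indices()])
```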
`load_from_disk(PATH_DATA_CLS_A)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 785
})
```
`load_from_disk(PATH_DATA_CLS_B)['train']` yields:
```python
Dataset({
features: ['labels', 'text'],
num_rows: 3341
})
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2040/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2040/timeline | null | completed | null | null | false | [
"Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?\r\nThey should both have a path to an arrow file\r\n\r\nAlso note that from #2025 concatenating datasets will no... |
https://api.github.com/repos/huggingface/datasets/issues/3838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3838/comments | https://api.github.com/repos/huggingface/datasets/issues/3838/events | https://github.com/huggingface/datasets/issues/3838 | 1,161,137,406 | I_kwDODunzps5FNYz- | 3,838 | Add a data type for labeled images (image segmentation) | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 0 | 2022-03-07T09:38:15Z | 2022-04-10T13:34:59Z | null | null | It might be a mix of Image and ClassLabel, and the color palette might be generated automatically.
---
### Example
every pixel in the images of the annotation column (in https://huggingface.co/datasets/scene_parse_150) has a value that gives its class, and the dataset itself is associated with a color palette (e.g. https://github.com/open-mmlab/mmsegmentation/blob/98a353b674c6052d319e7de4e5bcd65d670fcf84/mmseg/datasets/ade.py#L47) that maps every class to a color.
So we might want to render the image as a colored image instead of a black and white one.
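A rough sketch of such rendering (the 3-color palette here is made up; a real one would come from a list like the ADE20K palette linked above):
```python
import numpy as np
from PIL import Image

palette = np.array([[120, 120, 120], [180, 120, 120], [6, 230, 230]], dtype=np.uint8)
annotation = np.array(Image.open("annotation.png"))   # HxW array of class ids
colored = Image.fromarray(palette[annotation % len(palette)])  # HxWx3 RGB
colored.save("annotation_colored.png")
```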
<img width="785" alt="156741519-fbae6844-2606-4c28-837e-279d83d00865" src="https://user-images.githubusercontent.com/1676121/157005263-7058c584-2b70-465a-ad94-8a982f726cf4.png">
See https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/features/labeled_image.py for reference in Tensorflow | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3838/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3838/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4897/comments | https://api.github.com/repos/huggingface/datasets/issues/4897/events | https://github.com/huggingface/datasets/issues/4897 | 1,351,784,727 | I_kwDODunzps5QkpkX | 4,897 | datasets generate large arrow file | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-08-26T05:51:16Z | 2022-09-18T05:07:52Z | 2022-09-18T05:07:52Z | null | While checking large files on disk, I found a large cache file in the cifar10 data directory:

As we know, the size of the cifar10 dataset is ~130MB, but the cache file is almost 30GB, so there may be some problems here. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4897/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4897/timeline | null | completed | null | null | false | [
"Hi ! The cache files are the results of all the transforms you applied to the dataset using `map` for example.\r\nDid you run a transform that could potentially blow up the size of the dataset ?",
"@lhoestq,\r\nI don't remember, but I can't imagine what kind of transform may generate data that grow over 200 time... |
https://api.github.com/repos/huggingface/datasets/issues/897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/897/comments | https://api.github.com/repos/huggingface/datasets/issues/897/events | https://github.com/huggingface/datasets/issues/897 | 752,100,256 | MDU6SXNzdWU3NTIxMDAyNTY= | 897 | Dataset viewer issues | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | 5 | 2020-11-27T09:14:34Z | 2021-10-31T09:12:01Z | 2021-10-31T09:12:01Z | null | I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. This is probably expected, but perhaps a better error message can be shown to the user
```bash
IndexError: list index out of range
Traceback:
File "/home/sasha/streamlit/lib/streamlit/ScriptRunner.py", line 322, in _run_script
exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 316, in <module>
st.table(style)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 122, in wrapped_method
return dg._enqueue_new_element_delta(marshall_element, delta_type, last_index)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 367, in _enqueue_new_element_delta
rv = marshall_element(msg.delta.new_element)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 120, in marshall_element
return method(dg, element, *args, **kwargs)
File "/home/sasha/streamlit/lib/streamlit/DeltaGenerator.py", line 2944, in table
data_frame_proto.marshall_data_frame(data, element.table)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 54, in marshall_data_frame
_marshall_styles(proto_df.style, df, styler)
File "/home/sasha/streamlit/lib/streamlit/elements/data_frame_proto.py", line 73, in _marshall_styles
translated_style = styler._translate()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/pandas/io/formats/style.py", line 351, in _translate
* (len(clabels[0]) - len(hidden_columns))
```
- there seems to be **an encoding issue** in the default view, the dataset examples are shown as raw monospace text, without a decent encoding. That makes it hard to read for languages that use a lot of special characters. Take for instance the [cs-en WMT19 set](https://huggingface.co/nlp/viewer/?dataset=wmt19&config=cs-en). This problem goes away when you enable "List view", because then a syntax highlighter is used, and the special characters are encoded correctly.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/897/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/897/timeline | null | completed | null | null | false | [
"Thanks for reporting !\r\ncc @srush for the empty feature list issue and the encoding issue\r\ncc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ?",
"Ok, I redirected on our side to a new url. ⚠️ @srush: if you update the Streamlit config too to `/datasets/viewe... |
https://api.github.com/repos/huggingface/datasets/issues/53 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/53/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/53/comments | https://api.github.com/repos/huggingface/datasets/issues/53/events | https://github.com/huggingface/datasets/pull/53 | 613,436,158 | MDExOlB1bGxSZXF1ZXN0NDE0MTkwMzkz | 53 | [Features] Typo in generate_from_dict | [] | closed | false | null | 0 | 2020-05-06T16:05:23Z | 2020-05-07T15:28:46Z | 2020-05-07T15:28:45Z | null | Change `isinstance` test in features when generating features from dict. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/53/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/53/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/53.diff",
"html_url": "https://github.com/huggingface/datasets/pull/53",
"merged_at": "2020-05-07T15:28:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/53.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/53"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1653/comments | https://api.github.com/repos/huggingface/datasets/issues/1653/events | https://github.com/huggingface/datasets/pull/1653 | 775,632,945 | MDExOlB1bGxSZXF1ZXN0NTQ2Mjc0Njc0 | 1,653 | harem dataset: add data splits info | [] | closed | false | null | 0 | 2020-12-28T23:58:20Z | 2020-12-30T16:49:03Z | 2020-12-30T16:49:03Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1653/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1653/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1653.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1653",
"merged_at": "2020-12-30T16:49:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1653.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1653"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/999 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/999/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/999/comments | https://api.github.com/repos/huggingface/datasets/issues/999/events | https://github.com/huggingface/datasets/pull/999 | 755,246,786 | MDExOlB1bGxSZXF1ZXN0NTMwOTk1MTY3 | 999 | add generated_reviews_enth | [] | closed | false | null | 0 | 2020-12-02T12:50:43Z | 2020-12-03T11:17:28Z | 2020-12-03T11:17:28Z | null | `generated_reviews_enth` is created as part of [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf) for a machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf)) consists of English product reviews generated by [CTRL](https://arxiv.org/abs/1909.05858), translated by the Google Translate API and annotated as accepted or rejected (`correct`) based on fluency and adequacy of the translation by human annotators. This allows it to be used for English-to-Thai translation quality estimation (binary label), machine translation, and sentiment analysis.
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/999/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/999/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/999.diff",
"html_url": "https://github.com/huggingface/datasets/pull/999",
"merged_at": "2020-12-03T11:17:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/999.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/999"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5426 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5426/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5426/comments | https://api.github.com/repos/huggingface/datasets/issues/5426/events | https://github.com/huggingface/datasets/issues/5426 | 1,535,158,555 | I_kwDODunzps5bgKkb | 5,426 | CI tests are broken: SchemaInferenceError | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2023-01-16T16:02:07Z | 2023-06-02T06:40:32Z | 2023-01-16T16:49:04Z | null | CI test (unit, ubuntu-latest, deps-minimum) is broken, raising a `SchemaInferenceError`: see https://github.com/huggingface/datasets/actions/runs/3930901593/jobs/6721492004
```
FAILED tests/test_beam.py::BeamBuilderTest::test_download_and_prepare_sharded - datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
```
Stack trace:
```
______________ BeamBuilderTest.test_download_and_prepare_sharded _______________
[gw1] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python
self = <tests.test_beam.BeamBuilderTest testMethod=test_download_and_prepare_sharded>
    @require_beam
    def test_download_and_prepare_sharded(self):
        import apache_beam as beam
        original_write_parquet = beam.io.parquetio.WriteToParquet
        expected_num_examples = len(get_test_dummy_examples())
        with tempfile.TemporaryDirectory() as tmp_cache_dir:
            builder = DummyBeamDataset(cache_dir=tmp_cache_dir, beam_runner="DirectRunner")
            with patch("apache_beam.io.parquetio.WriteToParquet") as write_parquet_mock:
                write_parquet_mock.side_effect = partial(original_write_parquet, num_shards=2)
>               builder.download_and_prepare()
tests/test_beam.py:97:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:864: in download_and_prepare
**download_and_prepare_kwargs,
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/builder.py:1976: in _download_and_prepare
num_examples, num_bytes = beam_writer.finalize(metrics.query(m_filter))
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:694: in finalize
shard_num_bytes, _ = parquet_to_arrow(source, destination)
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:740: in parquet_to_arrow
num_bytes, num_examples = writer.finalize()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <datasets.arrow_writer.ArrowWriter object at 0x7f6dcbb3e810>
close_stream = True
    def finalize(self, close_stream=True):
        self.write_rows_on_file()
        # In case current_examples < writer_batch_size, but user uses finalize()
        if self._check_duplicates:
            self.check_duplicate_keys()
            # Re-intializing to empty list for next batch
            self.hkey_record = []
        self.write_examples_on_file()
        # If schema is known, infer features even if no examples were written
        if self.pa_writer is None and self.schema:
            self._build_writer(self.schema)
        if self.pa_writer is not None:
            self.pa_writer.close()
            self.pa_writer = None
            if close_stream:
                self.stream.close()
        else:
            if close_stream:
                self.stream.close()
>           raise SchemaInferenceError("Please pass `features` or at least one example when writing data")
E datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writing data
/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/site-packages/datasets/arrow_writer.py:593: SchemaInferenceError
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5426/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5426/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5528/comments | https://api.github.com/repos/huggingface/datasets/issues/5528/events | https://github.com/huggingface/datasets/pull/5528 | 1,582,195,085 | PR_kwDODunzps5J13wC | 5,528 | Push to hub in a pull request | [] | open | false | null | 10 | 2023-02-13T11:43:47Z | 2023-03-21T14:32:12Z | null | null | Fixes #5492. Introduces a new kwarg `create_pr` in `push_to_hub`, which is passed through to `HfApi.upload_file`; a minimal usage sketch follows.
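A minimal usage sketch of the proposed API (the repo id is a placeholder; per the comments below, `create_pr` is available in `huggingface_hub` starting from 0.8.0):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["hello", "world"]})

# Open a pull request on the Hub repo instead of committing directly to main
ds.push_to_hub("username/my-dataset", create_pr=True)
```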
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5528/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5528/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5528.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5528",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5528.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5528"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5528). All of your documentation changes will be reflected on that endpoint.",
"It seems that the parameter `create_pr` is available for [`0.8.0`](https://huggingface.co/docs/huggingface_hub/v0.8.1/en/package_reference/hf_api#h... |
https://api.github.com/repos/huggingface/datasets/issues/3978 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3978/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3978/comments | https://api.github.com/repos/huggingface/datasets/issues/3978/events | https://github.com/huggingface/datasets/issues/3978 | 1,175,226,456 | I_kwDODunzps5GDIhY | 3,978 | I can't view HFcallback dataset for ASR Space | [] | open | false | null | 3 | 2022-03-21T11:07:49Z | 2022-04-04T13:34:38Z | null | null | ## Dataset viewer issue for '*Urdu-ASR-flags*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/kingabzpro/Urdu-ASR-flags)*
*I think the dataset should show something, and if you want me to add a script, please show me the documentation. I thought this was supposed to be an automatic task.*
Am I the one who added this dataset? Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3978/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3978/timeline | null | null | null | null | false | [
"the dataset viewer is working on this dataset. I imagine the issue is that we would expect to be able to listen to the audio files in the `Please Record Your Voice file` column, right?\r\n\r\nmaybe @lhoestq or @albertvillanova could help\r\n\r\n<img width=\"1019\" alt=\"Capture d’écran 2022-03-24 à 17 36 20\" sr... |
https://api.github.com/repos/huggingface/datasets/issues/754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/754/comments | https://api.github.com/repos/huggingface/datasets/issues/754/events | https://github.com/huggingface/datasets/pull/754 | 727,863,105 | MDExOlB1bGxSZXF1ZXN0NTA4NjczNzM2 | 754 | Use full released xsum dataset | [] | closed | false | null | 3 | 2020-10-23T03:29:49Z | 2021-01-01T03:11:56Z | 2020-10-26T12:56:58Z | null | #672 Fix xsum to expand coverage and include IDs
Code based on parser from older version of `datasets/xsum/xsum.py`
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/754/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/754/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/754",
"merged_at": "2020-10-26T12:56:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/754"
} | true | [
"@lhoestq I took a shot at addressing your comments but the build scripts seem to be complaining about not being able to open dummy files. How do I resolve those errors without copying the full dataset into the dummy dir?",
"Could you check that the names of the dummy data files are right ?\r\nYou can use \r\n```... |
https://api.github.com/repos/huggingface/datasets/issues/3137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3137/comments | https://api.github.com/repos/huggingface/datasets/issues/3137/events | https://github.com/huggingface/datasets/pull/3137 | 1,033,363,652 | PR_kwDODunzps4tievk | 3,137 | Fix numpy deprecation warning for ragged tensors | [] | closed | false | null | 1 | 2021-10-22T09:17:46Z | 2021-10-22T16:04:15Z | 2021-10-22T16:04:14Z | null | Numpy shows a deprecation warning when we call `np.array` on a list of ragged tensors without specifying the `dtype`. If their shapes match, the tensors can be collated together, otherwise the resulting array should have `dtype=np.object`.
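As a standalone illustration of this behavior (plain NumPy, not the library's internal code):
```python
import numpy as np

matching = [np.zeros((2, 3)), np.ones((2, 3))]
collated = np.array(matching)  # same shapes: collated into a (2, 2, 3) array

ragged = [np.zeros((2, 3)), np.zeros((4, 3))]
# Without a dtype, NumPy emits a deprecation warning for ragged input
# (and later raises); an object dtype keeps each tensor intact instead.
kept = np.array(ragged, dtype=object)  # object array of shape (2,)
```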
Fix #3084
cc @Rocketknight1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3137/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3137/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3137.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3137",
"merged_at": "2021-10-22T16:04:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3137.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3137"
} | true | [
"This'll be a really helpful fix, thank you!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4784 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4784/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4784/comments | https://api.github.com/repos/huggingface/datasets/issues/4784/events | https://github.com/huggingface/datasets/issues/4784 | 1,326,395,280 | I_kwDODunzps5PDy-Q | 4,784 | Add Multiface dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | open | false | null | 3 | 2022-08-02T21:00:22Z | 2022-08-08T14:42:36Z | null | null | ## Adding a Dataset
- **Name:** Multiface dataset
- **Description:** High-quality recordings of the faces of 13 identities, each captured in a multi-view capture stage while performing various facial expressions. An average of 12,200 (v1 scripts) to 23,000 (v2 scripts) frames per subject, with a capture rate of 30 fps.
- **Data:** https://github.com/facebookresearch/multiface
The whole dataset is 65TB though, so I'm not sure
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4784/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4784/timeline | null | null | null | null | false | [
"Hi @osanseviero I would like to add this dataset.",
"Hey @nandwalritik! Thanks for offering to help!\r\n\r\nThis dataset might be somewhat complex and I'm concerned about it being 65 TB, which would be quite expensive to host. @lhoestq @mariosasko I would love your input if you think it's worth adding this datas... |
https://api.github.com/repos/huggingface/datasets/issues/2234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2234/comments | https://api.github.com/repos/huggingface/datasets/issues/2234/events | https://github.com/huggingface/datasets/pull/2234 | 860,442,246 | MDExOlB1bGxSZXF1ZXN0NjE3MzI4NDU3 | 2,234 | Fix bash snippet formatting in ADD_NEW_DATASET.md | [] | closed | false | null | 0 | 2021-04-17T16:01:08Z | 2021-04-19T10:57:31Z | 2021-04-19T07:51:36Z | null | This PR indents the paragraphs around the bash snippets in ADD_NEW_DATASET.md to fix formatting. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2234/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2234/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2234.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2234",
"merged_at": "2021-04-19T07:51:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2234.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2234"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3937 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3937/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3937/comments | https://api.github.com/repos/huggingface/datasets/issues/3937/events | https://github.com/huggingface/datasets/issues/3937 | 1,170,832,006 | I_kwDODunzps5FyXqG | 3,937 | Missing languages in lvwerra/github-code dataset | [
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] | closed | false | null | 5 | 2022-03-16T10:32:03Z | 2022-03-22T07:09:23Z | 2022-03-21T14:50:47Z | null | Hi,
I'm working with the github-code dataset. First of all, thank you for creating this amazing dataset!
I've noticed that two languages are missing from the dataset: TypeScript and Scala.
Looks like they're also omitted from the query you used to get the original code.
Are there any plans to add them in the future?
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3937/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3937/timeline | null | completed | null | null | false | [
"Thanks for contacting @Eytan-S.\r\n\r\nI think @lvwerra could better answer this. ",
"That seems to be an oversight - I originally planned to include them in the dataset and for some reason they were in the list of languages but not in the query. Since there is an issue with the deduplication step I'll rerun the... |
https://api.github.com/repos/huggingface/datasets/issues/706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/706/comments | https://api.github.com/repos/huggingface/datasets/issues/706/events | https://github.com/huggingface/datasets/pull/706 | 713,721,959 | MDExOlB1bGxSZXF1ZXN0NDk2OTkwMDA0 | 706 | Fix config creation for data files with NamedSplit | [] | closed | false | null | 0 | 2020-10-02T15:46:49Z | 2020-10-05T08:15:00Z | 2020-10-05T08:14:59Z | null | During config creation, we need to iterate through the data files of all the splits to compute a hash.
To make sure the hash is unique given a certain combination of files/splits, we sort the split names.
However, `NamedSplit` objects can't be passed to `sorted` directly, which currently raises an error; we need to sort by the string of their names instead (see the sketch below).
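A minimal sketch of the failure mode and the fix (standalone, not the library's internal code):
```python
from datasets import NamedSplit

data_files = {NamedSplit("train"): ["train.csv"], NamedSplit("test"): ["test.csv"]}

# sorted(data_files) would compare NamedSplit objects and raise a TypeError;
# sorting by the string form of each split name is deterministic and works.
ordered = sorted(data_files, key=str)
print([str(split) for split in ordered])  # ['test', 'train']
```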
Fix #705 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/706/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/706/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/706.diff",
"html_url": "https://github.com/huggingface/datasets/pull/706",
"merged_at": "2020-10-05T08:14:59Z",
"patch_url": "https://github.com/huggingface/datasets/pull/706.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/706"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4667 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4667/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4667/comments | https://api.github.com/repos/huggingface/datasets/issues/4667/events | https://github.com/huggingface/datasets/issues/4667 | 1,299,735,703 | I_kwDODunzps5NeGSX | 4,667 | Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
}
] | closed | false | null | 0 | 2022-07-09T18:03:15Z | 2022-07-11T07:47:15Z | 2022-07-11T07:47:15Z | null | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4667/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4667/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1147/comments | https://api.github.com/repos/huggingface/datasets/issues/1147/events | https://github.com/huggingface/datasets/pull/1147 | 757,502,199 | MDExOlB1bGxSZXF1ZXN0NTMyODY4MzU2 | 1,147 | Vinay/add/telugu books | [] | closed | false | null | 0 | 2020-12-05T01:17:02Z | 2020-12-05T16:36:04Z | 2020-12-05T16:36:04Z | null | Real data tests are failing as this dataset needs to be manually downloaded | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1147/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1147/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1147.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1147",
"merged_at": "2020-12-05T16:36:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1147.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1147"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3604 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3604/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3604/comments | https://api.github.com/repos/huggingface/datasets/issues/3604/events | https://github.com/huggingface/datasets/issues/3604 | 1,108,477,316 | I_kwDODunzps5CEgWE | 3,604 | Dataset Viewer not showing Previews for Private Datasets | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "E5583E",
"default": fals... | closed | false | null | 2 | 2022-01-19T19:29:26Z | 2022-09-26T08:04:43Z | 2022-09-26T08:04:43Z | null | ## Dataset viewer issue for 'abidlabs/test-audio-13'
It seems that the dataset viewer does not show previews for `private` datasets, even for the user whose private dataset it is. See [1] for an example. If I change the visibility to public, then it does show, but it would be useful to have the viewer even for private datasets.

**Link:**
[1] https://huggingface.co/datasets/abidlabs/test-audio-13
**Am I the one who added this dataset?**
Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3604/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3604/timeline | null | completed | null | null | false | [
"Sure, it's on the roadmap.",
"Closing in favor of https://github.com/huggingface/datasets-server/issues/39."
] |
https://api.github.com/repos/huggingface/datasets/issues/5914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5914/comments | https://api.github.com/repos/huggingface/datasets/issues/5914/events | https://github.com/huggingface/datasets/issues/5914 | 1,731,483,996 | I_kwDODunzps5nNFlc | 5,914 | array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size in Datasets | [] | open | false | null | 0 | 2023-05-30T04:25:00Z | 2023-05-30T04:25:00Z | null | null | ### Describe the bug
When using the `filter` or `map` function to preprocess a dataset, a ValueError is encountered with the error message "array is too big; arr.size * arr.dtype.itemsize is larger than the maximum possible size."
Detailed error message:
```
Traceback (most recent call last):
File "data_processing.py", line 26, in <module>
processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split],writer_batch_size = 50)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2405, in map
desc=desc,
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper
out = func(self, *args, **kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2756, in _map_single
example = apply_function_on_filtered_inputs(example, i, offset=offset)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated
result = f(decorated_item, *args, **kwargs)
File "data_processing.py", line 11, in prepare_dataset
audio = batch["audio"]
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 123, in __getitem__
value = decode_nested_example(self.features[key], value) if value is not None else None
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/features.py", line 1260, in decode_nested_example
return schema.decode_example(obj, token_per_repo_id=token_per_repo_id) if obj is not None else None
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 156, in decode_example
array, sampling_rate = self._decode_non_mp3_path_like(path, token_per_repo_id=token_per_repo_id)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/datasets/features/audio.py", line 257, in _decode_non_mp3_path_like
array, sampling_rate = librosa.load(f, sr=self.sampling_rate, mono=self.mono)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 176, in load
y, sr_native = __soundfile_load(path, offset, duration, dtype)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/librosa/core/audio.py", line 222, in __soundfile_load
y = sf_desc.read(frames=frame_duration, dtype=dtype, always_2d=False).T
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 891, in read
out = self._create_empty_array(frames, always_2d, dtype)
File "/projects/zhwa3087/software/anaconda/envs/mycustomenv/lib/python3.7/site-packages/soundfile.py", line 1323, in _create_empty_array
return np.empty(shape, dtype, order='C')
ValueError: array is too big; `arr.size * arr.dtype.itemsize` is larger than the maximum possible size.
```
### Steps to reproduce the bug
```python
from datasets import load_dataset, DatasetDict
from transformers import WhisperFeatureExtractor
from transformers import WhisperTokenizer
samromur_children= load_dataset("language-and-voice-lab/samromur_children")
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="icelandic", task="transcribe")
def prepare_dataset(batch):
    # load and resample audio data from 48 to 16kHz
    audio = batch["audio"]
    # compute log-Mel input features from input audio array
    batch["input_features"] = feature_extractor(audio["array"], sampling_rate=16000).input_features[0]
    # encode target text to label ids
    batch["labels"] = tokenizer(batch["normalized_text"]).input_ids
    return batch

cache_dict = {"train": "./cache/audio_train.cache", \
              "validation": "./cache/audio_validation.cache", \
              "test": "./cache/audio_test.cache"}
filter_cache_dict = {"train": "./cache/filter_train.arrow", \
                     "validation": "./cache/filter_validation.arrow", \
                     "test": "./cache/filter_test.arrow"}
print("before filtering")
print(samromur_children)
#filter the dataset to only include examples with more than 2 seconds of audio
samromur_children = samromur_children.filter(lambda example: example["audio"]["array"].shape[0] > 16000*2, cache_file_names=filter_cache_dict)
print("after filtering")
print(samromur_children)
processed_dataset = DatasetDict()
# processed_dataset = samromur_children.map(prepare_dataset, cache_file_names=cache_dict, num_proc=10,)
for split in ["train", "validation", "test"]:
    processed_dataset[split] = samromur_children[split].map(prepare_dataset, cache_file_name=cache_dict[split])
```
### Expected behavior
The dataset is successfully processed and ready to train the model.
### Environment info
Python version: 3.7.13
datasets package version: 2.4.0
librosa package version: 0.10.0.post2 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5914/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5914/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4390 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4390/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4390/comments | https://api.github.com/repos/huggingface/datasets/issues/4390/events | https://github.com/huggingface/datasets/pull/4390 | 1,244,835,877 | PR_kwDODunzps44RoXs | 4,390 | Fix metadata validation | [] | closed | false | null | 1 | 2022-05-23T09:11:20Z | 2022-06-01T09:27:52Z | 2022-06-01T09:19:25Z | null | Since Python 3.8, the typing module:
- raises an AttributeError when trying to access `__args__` on any type, e.g.: `List.__args__`
- provides the `get_args` function instead: `get_args(List)`
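A minimal sketch of a version-compatible accessor (the helper name is hypothetical):
```python
import sys
import typing

def get_type_args(tp):
    """Return the type arguments of tp across Python versions."""
    if sys.version_info >= (3, 8):
        return typing.get_args(tp)      # public API on 3.8+
    return getattr(tp, "__args__", ())  # pre-3.8 fallback

print(get_type_args(typing.List[int]))  # (int,)
print(get_type_args(typing.List))       # () on 3.8+ instead of an AttributeError
```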
This PR implements a fix for Python >=3.8 while maintaining backward compatibility. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4390/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4390/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4390.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4390",
"merged_at": "2022-06-01T09:19:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4390.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4390"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1894/comments | https://api.github.com/repos/huggingface/datasets/issues/1894/events | https://github.com/huggingface/datasets/issues/1894 | 809,609,654 | MDU6SXNzdWU4MDk2MDk2NTQ= | 1,894 | benchmarking against MMapIndexedDataset | [] | open | false | null | 3 | 2021-02-16T20:04:58Z | 2021-02-17T18:52:28Z | null | null | I am trying to benchmark my `datasets`-based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implementation uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB of tokens).
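For reference, a typical `psrecord` invocation for this kind of measurement (a sketch; the script name is a placeholder):
```bash
pip install psrecord
# Track CPU and memory of the benchmark run, including child processes,
# logging samples to a text file and rendering a plot.
psrecord "python benchmark_wikitext103.py" --log activity.txt --plot activity.png --include-children
```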
Questions:
1) Is this (basically identical) performance expected?
2) Is there a scenario where this library will outperform `MMapIndexedDataset`? (maybe more examples/larger examples?)
3) Should I be using different benchmarking tools than `psrecord`/how do you guys do benchmarks?
Thanks in advance! Sam | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1894/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1894/timeline | null | null | null | null | false | [
"Hi sam !\r\nIndeed we can expect the performances to be very close since both MMapIndexedDataset and the `datasets` implem use memory mapping. With memory mapping what determines the I/O performance is the speed of your hard drive/SSD.\r\n\r\nIn terms of performance we're pretty close to the optimal speed for read... |
https://api.github.com/repos/huggingface/datasets/issues/5171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5171/comments | https://api.github.com/repos/huggingface/datasets/issues/5171/events | https://github.com/huggingface/datasets/pull/5171 | 1,425,355,111 | PR_kwDODunzps5BpsXf | 5,171 | Add PB and TB in convert_file_size_to_int | [] | closed | false | null | 1 | 2022-10-27T09:50:31Z | 2022-10-27T12:14:27Z | 2022-10-27T12:12:30Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5171/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5171/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5171.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5171",
"merged_at": "2022-10-27T12:12:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5171.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5171"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3597 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3597/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3597/comments | https://api.github.com/repos/huggingface/datasets/issues/3597/events | https://github.com/huggingface/datasets/issues/3597 | 1,108,092,864 | I_kwDODunzps5CDCfA | 3,597 | ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-01-19T13:19:28Z | 2022-08-05T12:35:51Z | 2022-02-14T08:46:34Z | null | ## Bug
Installing `datasets` with the `streaming` extra gives the following error.
## Steps to reproduce the bug
```python
! git clone https://github.com/huggingface/datasets.git
! cd datasets
! pip install -e ".[streaming]"
```
## Actual results
Cloning into 'datasets'...
remote: Enumerating objects: 50816, done.
remote: Counting objects: 100% (2356/2356), done.
remote: Compressing objects: 100% (1606/1606), done.
remote: Total 50816 (delta 834), reused 1741 (delta 525), pack-reused 48460
Receiving objects: 100% (50816/50816), 72.47 MiB | 27.68 MiB/s, done.
Resolving deltas: 100% (22541/22541), done.
Checking out files: 100% (6722/6722), done.
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3597/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3597/timeline | null | completed | null | null | false | [
"Hi! The `cd` command in Jupyer/Colab needs to start with `%`, so this should work:\r\n```\r\n!git clone https://github.com/huggingface/datasets.git\r\n%cd datasets\r\n!pip install -e \".[streaming]\"\r\n```",
"thanks @mariosasko i had the same mistake and your solution is what was needed"
] |
https://api.github.com/repos/huggingface/datasets/issues/1399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1399/comments | https://api.github.com/repos/huggingface/datasets/issues/1399/events | https://github.com/huggingface/datasets/pull/1399 | 760,499,576 | MDExOlB1bGxSZXF1ZXN0NTM1MzIwNzA2 | 1,399 | Add HoVer Dataset | [] | closed | false | null | 2 | 2020-12-09T16:55:39Z | 2020-12-14T10:57:23Z | 2020-12-14T10:57:22Z | null | HoVer: A Dataset for Many-Hop Fact Extraction And Claim Verification
https://arxiv.org/abs/2011.03088 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1399/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1399/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1399.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1399",
"merged_at": "2020-12-14T10:57:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1399.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1399"
} | true | [
"@lhoestq all comments addressed :) ",
"merging since the CI is fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/3691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3691/comments | https://api.github.com/repos/huggingface/datasets/issues/3691/events | https://github.com/huggingface/datasets/pull/3691 | 1,127,629,306 | PR_kwDODunzps4yQThV | 3,691 | Upgrade black to version ~=22.0 | [] | closed | false | null | 0 | 2022-02-08T18:45:19Z | 2022-02-08T19:56:40Z | 2022-02-08T19:56:39Z | null | Upgrades the `datasets` library quality tool `black` to use the first stable release of `black`, version 22.0. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3691/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3691/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3691.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3691",
"merged_at": "2022-02-08T19:56:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3691.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3691"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/59 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/59/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/59/comments | https://api.github.com/repos/huggingface/datasets/issues/59/events | https://github.com/huggingface/datasets/pull/59 | 614,366,045 | MDExOlB1bGxSZXF1ZXN0NDE0OTM3NTgx | 59 | Fix tests | [] | closed | false | null | 5 | 2020-05-07T21:48:09Z | 2020-05-08T10:57:57Z | 2020-05-08T10:46:51Z | null | @patrickvonplaten I've broken a bit the tests with #25 while simplifying and re-organizing the `load.py` and `download_manager.py` scripts.
I'm trying to fix them here but I have a weird error, do you think you can have a look?
```bash
(datasets) MacBook-Pro-de-Thomas:datasets thomwolf$ python -m pytest -sv ./tests/test_dataset_common.py::DatasetTest::test_builder_class_snli
============================================================================= test session starts =============================================================================
platform darwin -- Python 3.7.7, pytest-5.4.1, py-1.8.1, pluggy-0.13.1 -- /Users/thomwolf/miniconda2/envs/datasets/bin/python
cachedir: .pytest_cache
rootdir: /Users/thomwolf/Documents/GitHub/datasets
plugins: xdist-1.31.0, forked-1.1.3
collected 1 item
tests/test_dataset_common.py::DatasetTest::test_builder_class_snli ERROR
=================================================================================== ERRORS ====================================================================================
____________________________________________________________ ERROR at setup of DatasetTest.test_builder_class_snli ____________________________________________________________
file_path = <module 'tests.test_dataset_common' from '/Users/thomwolf/Documents/GitHub/datasets/tests/test_dataset_common.py'>
download_config = DownloadConfig(cache_dir=None, force_download=False, resume_download=False, local_files_only=False, proxies=None, user_agent=None, extract_compressed_file=True, force_extract=True)
download_kwargs = {}
def setup_module(file_path: str, download_config: Optional[DownloadConfig] = None, **download_kwargs,) -> DatasetBuilder:
r"""
Download/extract/cache a dataset to add to the lib from a path or url which can be:
- a path to a local directory containing the dataset processing python script
- an url to a S3 directory with a dataset processing python script
Dataset codes are cached inside the lib to allow easy import (avoid ugly sys.path tweaks)
and using cloudpickle (among other things).
Return: tuple of
the unique id associated to the dataset
the local path to the dataset
"""
if download_config is None:
download_config = DownloadConfig(**download_kwargs)
download_config.extract_compressed_file = True
download_config.force_extract = True
> name = list(filter(lambda x: x, file_path.split("/")))[-1] + ".py"
E AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
src/nlp/load.py:169: AttributeError
============================================================================== warnings summary ===============================================================================
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15
/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py:15: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
-- Docs: https://docs.pytest.org/en/latest/warnings.html
=========================================================================== short test summary info ===========================================================================
ERROR tests/test_dataset_common.py::DatasetTest::test_builder_class_snli - AttributeError: module 'tests.test_dataset_common' has no attribute 'split'
========================================================================= 1 warning, 1 error in 3.63s =========================================================================
```
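For context, a minimal reproduction of the name collision identified in the comments below (pytest treats a module-level `setup_module` as its per-module setup hook); the test file here is hypothetical:
```python
# tests/test_example.py
from nlp.load import setup_module  # the imported name shadows pytest's hook

def test_something():
    pass

# On collection, pytest calls setup_module(<test module object>), so the library
# function receives a module instead of a path string, which produces the
# "has no attribute 'split'" error shown above.
```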
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/59/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/59/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/59.diff",
"html_url": "https://github.com/huggingface/datasets/pull/59",
"merged_at": "2020-05-08T10:46:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/59.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/59"
} | true | [
"I can fix the tests tomorrow :-) ",
"Very weird bug indeed! I think the problem was that when importing `setup_module` we overwrote `pytest's` setup_module function. I think this is the relevant code in pytest: https://github.com/pytest-dev/pytest/blob/9d2eabb397b059b75b746259daeb20ee5588f559/src/_pytest/python.... |
https://api.github.com/repos/huggingface/datasets/issues/2718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2718/comments | https://api.github.com/repos/huggingface/datasets/issues/2718/events | https://github.com/huggingface/datasets/pull/2718 | 953,360,663 | MDExOlB1bGxSZXF1ZXN0Njk3NDE0NTQy | 2,718 | New documentation structure | [] | closed | false | null | 5 | 2021-07-26T23:15:13Z | 2021-09-13T17:20:53Z | 2021-09-13T17:20:52Z | null | Organize Datasets documentation into four documentation types to improve clarity and discoverability of content.
**Content to add in the very short term (feel free to add anything I'm missing):**
- A discussion on why Datasets uses Arrow that includes some context and background about why we use Arrow. Would also be great to talk about Datasets speed and performance here, and if you can share any benchmarking/tests you did, that would be awesome! Finally, a discussion about how memory-mapping frees the user from RAM constraints would be very helpful.
- Explain why you would want to disable or override verifications when loading a dataset.
- If possible, include a code sample of when the number of elements in the field of an output dictionary aren’t the same as the other fields in the output dictionary (taken from the [note](https://huggingface.co/docs/datasets/processing.html#augmenting-the-dataset) here). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2718/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2718/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2718.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2718",
"merged_at": "2021-09-13T17:20:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2718.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2718"
} | true | [
"I just did some minor changes + added some content in these sections: share, about arrow, about cache\r\n\r\nFeel free to mark this PR as ready for review ! :)",
"I just separated the `Share` How-to page into three pages: share, dataset_script and dataset_card.\r\n\r\nThis way in the share page we can explain in... |
https://api.github.com/repos/huggingface/datasets/issues/5025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5025/comments | https://api.github.com/repos/huggingface/datasets/issues/5025/events | https://github.com/huggingface/datasets/issues/5025 | 1,386,011,239 | I_kwDODunzps5SnNpn | 5,025 | Custom Json Dataset Throwing Error when batch is False | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-09-26T12:38:39Z | 2022-09-27T19:50:00Z | 2022-09-27T19:50:00Z | null | ## Describe the bug
A clear and concise description of what the bug is.
I tried to create my custom dataset using the code below.
```
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torchvision import transforms
from transformers import AutoProcessor
# we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes,
# based on the checkpoint we provide from the hub
from datasets import load_dataset
def prepare_examples(examples):
    # Some preprocessing for each image and text, as all my data is saved in the cloud
    # For this reason I couldn't set batched to True.
    encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,
                         truncation=True, padding="max_length")
    # encoding['pixel_values']=np.array(encoding['pixel_values'])
    return encoding

dataset = load_dataset("json", data_files='issues.jsonl')
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
features = dataset["train"].features
column_names = dataset["train"].column_names
# we need to define custom features for `set_format` (used later on) to work properly
features = Features({
    'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
    'input_ids': Sequence(feature=Value(dtype='int64')),
    'attention_mask': Sequence(Value(dtype='int64')),
    'bbox': Array2D(dtype="int64", shape=(512, 4)),
    'labels': Sequence(feature=Value(dtype='int64')),
})
train_dataset = dataset["train"].map(
    prepare_examples,
    batched=False,
    remove_columns=column_names,
    features=features,
)
```
It throws below error.
```
/opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py in __arrow_array__(self, type)
172 storage = to_pyarrow_listarray(data, pa_type)
--> 173 return pa.ExtensionArray.from_storage(pa_type, storage)
174
/opt/conda/lib/python3.7/site-packages/pyarrow/array.pxi in pyarrow.lib.ExtensionArray.from_storage()
TypeError: Incompatible storage type list<item: list<item: list<item: list<item: float>>>> for extension type extension<arrow.py_extension_type<Array3DExtensionType>>
```
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torchvision import transforms
from transformers import AutoProcessor
# we'll use the Auto API here - it will load LayoutLMv3Processor behind the scenes,
# based on the checkpoint we provide from the hub
from datasets import load_dataset

def prepare_examples(examples):
    # Some preprocessing for each image and text, as all my data is saved in the cloud
    encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,
                         truncation=True, padding="max_length")
    # encoding['pixel_values']=np.array(encoding['pixel_values'])
    return encoding

dataset = load_dataset("json", data_files='issues.jsonl')
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base", apply_ocr=False)
features = dataset["train"].features
column_names = dataset["train"].column_names
# we need to define custom features for `set_format` (used later on) to work properly
features = Features({
    'pixel_values': Array3D(dtype="float32", shape=(3, 224, 224)),
    'input_ids': Sequence(feature=Value(dtype='int64')),
    'attention_mask': Sequence(Value(dtype='int64')),
    'bbox': Array2D(dtype="int64", shape=(512, 4)),
    'labels': Sequence(feature=Value(dtype='int64')),
})
train_dataset = dataset["train"].map(
    prepare_examples,
    batched=False,
    remove_columns=column_names,
    features=features,
)
```
## Expected results
A clear and concise description of the expected results.
Expected would be similar to all the otherdatasets with no error.
## Actual results
Specify the actual results or traceback.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Unix
- Python version: 3.9
- PyArrow version: 9.0.0
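For reference, the fix suggested in the comments below (drop the leading batch dimension when `batched=False`), as a minimal sketch:
```python
def prepare_examples(example):
    encoding = processor(img_as_tensor, words, boxes=boxes, word_labels=labels,
                         truncation=True, padding="max_length")
    # The processor returns batched outputs even for a single example, so each
    # array is 4D instead of 3D; index out the batch dimension to match the
    # declared features.
    return {k: v[0] for k, v in encoding.items()}
```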
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5025/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5025/timeline | null | completed | null | null | false | [
"Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n```python\r\ndef prepare_examples(examples):\r\n #Some preporcessin... |
https://api.github.com/repos/huggingface/datasets/issues/2871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2871/comments | https://api.github.com/repos/huggingface/datasets/issues/2871/events | https://github.com/huggingface/datasets/issues/2871 | 989,436,088 | MDU6SXNzdWU5ODk0MzYwODg= | 2,871 | datasets.config.PYARROW_VERSION has no attribute 'major' | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 5 | 2021-09-06T21:06:57Z | 2021-09-08T08:51:52Z | 2021-09-08T08:51:52Z | null | In the test_dataset_common.py script, lines 288-289
```
if datasets.config.PYARROW_VERSION.major < 3:
packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"]
```
which throws the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested this on both datasets.__version__=='1.11.0' and '1.9.0'. I am using Mac OS.
```
import datasets
datasets.config.PYARROW_VERSION.major
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/var/folders/1f/0wqmlgp90qjd5mpj53fnjq440000gn/T/ipykernel_73361/2547517336.py in <module>
1 import datasets
----> 2 datasets.config.PYARROW_VERSION.major
AttributeError: 'str' object has no attribute 'major'
```
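A version-proof way to do the comparison, along the lines of the workaround in the comments below (the use of `packaging` here is an assumption):
```python
from packaging import version
import datasets

# PYARROW_VERSION is a plain string in some datasets releases, so normalize
# it before comparing the major component.
pyarrow_major = version.parse(str(datasets.config.PYARROW_VERSION)).major
if pyarrow_major < 3:
    print("parquet packaged module not supported")
```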
## Environment info
- `datasets` version: 1.11.0
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.11
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2871/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2871/timeline | null | completed | null | null | false | [
"I have changed line 288 to `if int(datasets.config.PYARROW_VERSION.split(\".\")[0]) < 3:` just to get around it.",
"Hi @bwang482,\r\n\r\nI'm sorry but I'm not able to reproduce your bug.\r\n\r\nPlease note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simulta... |
https://api.github.com/repos/huggingface/datasets/issues/5993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5993/comments | https://api.github.com/repos/huggingface/datasets/issues/5993/events | https://github.com/huggingface/datasets/issues/5993 | 1,776,643,555 | I_kwDODunzps5p5W3j | 5,993 | ValueError: Table schema does not match schema used to create file | [] | closed | false | null | 2 | 2023-06-27T10:54:07Z | 2023-06-27T15:36:42Z | 2023-06-27T15:32:44Z | null | ### Describe the bug
Saving a dataset as parquet fails with a `ValueError: Table schema does not match schema used to create file` if the dataset was obtained from a `.select_columns()` call with columns selected out of order.
### Steps to reproduce the bug
```python
import datasets
dataset = datasets.Dataset.from_dict(
    {
        "x1": [1, 2, 3],
        "x2": [10, 11, 12],
    }
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_parquet("demo.parquet")
```
```shell
>>>
ValueError: Table schema does not match schema used to create file:
table:
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53 vs.
file:
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
```
---
I think this is because after the `.select_columns()` call with out-of-order columns, the output dataset features' schema ends up being out of sync with the schema of the arrow table backing it.
```python
ds.features.arrow_schema
>>>
x1: int64
x2: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x1": {"dtype": "int64", "_type": "V' + 53
ds.data.schema
>>>
x2: int64
x1: int64
-- schema metadata --
huggingface: '{"info": {"features": {"x2": {"dtype": "int64", "_type": "V' + 53
```
So when we call `.to_parquet()`, the call behind the scenes to `datasets.io.parquet.ParquetDatasetWriter(...).write()` which initialises the backend `pyarrow.parquet.ParquetWriter` with `schema = self.dataset.features.arrow_schema` triggers `pyarrow` on write when [it checks](https://github.com/apache/arrow/blob/11b140a734a516e436adaddaeb35d23f30dcce44/python/pyarrow/parquet/core.py#L1086-L1090) that the `ParquetWriter` schema matches the schema of the table being written 🙌
https://github.com/huggingface/datasets/blob/6ed837325cb539a5deb99129e5ad181d0269e050/src/datasets/io/parquet.py#L139-L141
### Expected behavior
The dataset gets successfully saved as parquet.
*In the same way as it does if saving it as csv:
```python
import datasets
dataset = datasets.Dataset.from_dict(
    {
        "x1": [1, 2, 3],
        "x2": [10, 11, 12],
    }
)
ds = dataset.select_columns(["x2", "x1"])
ds.to_csv("demo.csv")
```
### Environment info
`python==3.11`
`datasets==2.13.1`
| {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5993/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5993/timeline | null | completed | null | null | false | [
"We'll do a new release of `datasets` soon to make the fix available :)\r\n\r\nIn the meantime you can use `datasets` from source (main)",
"Thank you very much @lhoestq ! 🚀 "
] |
https://api.github.com/repos/huggingface/datasets/issues/4869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4869/comments | https://api.github.com/repos/huggingface/datasets/issues/4869/events | https://github.com/huggingface/datasets/pull/4869 | 1,345,513,758 | PR_kwDODunzps49hBGY | 4,869 | Fix typos in documentation | [] | closed | false | null | 1 | 2022-08-21T15:10:03Z | 2022-08-22T09:25:39Z | 2022-08-22T09:09:58Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4869/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4869/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4869.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4869",
"merged_at": "2022-08-22T09:09:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4869.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4869"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2595 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2595/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2595/comments | https://api.github.com/repos/huggingface/datasets/issues/2595/events | https://github.com/huggingface/datasets/issues/2595 | 937,483,120 | MDU6SXNzdWU5Mzc0ODMxMjA= | 2,595 | ModuleNotFoundError: No module named 'datasets.tasks' while importing common voice datasets | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-07-06T03:20:55Z | 2021-07-06T05:59:49Z | 2021-07-06T05:59:49Z | null | Error traceback:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-8-a7b592d3bca0> in <module>()
1 from datasets import load_dataset, load_metric
2
----> 3 common_voice_train = load_dataset("common_voice", "pa-IN", split="train+validation")
4 common_voice_test = load_dataset("common_voice", "pa-IN", split="test")
9 frames
/root/.cache/huggingface/modules/datasets_modules/datasets/common_voice/078d412587e9efeb0ae2e574da99c31e18844c496008d53dc5c60f4159ed639b/common_voice.py in <module>()
19
20 import datasets
---> 21 from datasets.tasks import AutomaticSpeechRecognition
22
23
ModuleNotFoundError: No module named 'datasets.tasks' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2595/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2595/timeline | null | completed | null | null | false | [
"Hi @profsatwinder.\r\n\r\nIt looks like you are using an old version of `datasets`. Please update it with `pip install -U datasets` and indicate if the problem persists.",
"@albertvillanova Thanks for the information. I updated it to 1.9.0 and the issue is resolved. Thanks again. "
] |
https://api.github.com/repos/huggingface/datasets/issues/652 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/652/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/652/comments | https://api.github.com/repos/huggingface/datasets/issues/652/events | https://github.com/huggingface/datasets/pull/652 | 705,390,850 | MDExOlB1bGxSZXF1ZXN0NDkwMTI3MjIx | 652 | handle connection error in download_prepared_from_hf_gcs | [] | closed | false | null | 0 | 2020-09-21T08:21:11Z | 2020-09-21T08:28:43Z | 2020-09-21T08:28:42Z | null | Fix #647 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/652/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/652/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/652.diff",
"html_url": "https://github.com/huggingface/datasets/pull/652",
"merged_at": "2020-09-21T08:28:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/652.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/652"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/959 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/959/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/959/comments | https://api.github.com/repos/huggingface/datasets/issues/959/events | https://github.com/huggingface/datasets/pull/959 | 754,418,610 | MDExOlB1bGxSZXF1ZXN0NTMwMzIxOTM1 | 959 | Add Tunizi Dataset | [] | closed | false | null | 0 | 2020-12-01T13:59:39Z | 2020-12-03T14:21:41Z | 2020-12-03T14:21:40Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/959/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/959/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/959.diff",
"html_url": "https://github.com/huggingface/datasets/pull/959",
"merged_at": "2020-12-03T14:21:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/959.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/959"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/5928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5928/comments | https://api.github.com/repos/huggingface/datasets/issues/5928/events | https://github.com/huggingface/datasets/pull/5928 | 1,744,098,371 | PR_kwDODunzps5SUXPC | 5,928 | Fix link to quickstart docs in README.md | [] | closed | false | null | 3 | 2023-06-06T15:23:01Z | 2023-06-06T15:52:34Z | 2023-06-06T15:43:53Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5928/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5928/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5928.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5928",
"merged_at": "2023-06-06T15:43:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5928.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5928"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/5675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5675/comments | https://api.github.com/repos/huggingface/datasets/issues/5675/events | https://github.com/huggingface/datasets/issues/5675 | 1,641,763,478 | I_kwDODunzps5h21KW | 5,675 | Filter datasets by language code | [] | closed | false | null | 4 | 2023-03-27T09:42:28Z | 2023-03-30T08:08:15Z | 2023-03-30T08:08:15Z | null | Hi! I use the language search field on https://huggingface.co/datasets
However, some of the datasets tagged by ISO language code are not accessible by this search form.
For example, [myv_ru_2022](https://huggingface.co/datasets/slone/myv_ru_2022) has the `myv` language tag, but it is not included in the Languages search form.
I've also noticed the same problem with `mhr` (see https://huggingface.co/datasets/AigizK/mari-russian-parallel-corpora) | {
"+1": 6,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 6,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5675/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5675/timeline | null | completed | null | null | false | [
"The dataset still can be found, if instead of using the search form you just enter the language code in the url, like https://huggingface.co/datasets?language=language:myv. \r\n\r\nBut of course having a more complete list of languages in the search form (or just a fallback to the language codes, if they are missi... |
https://api.github.com/repos/huggingface/datasets/issues/1363 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1363/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1363/comments | https://api.github.com/repos/huggingface/datasets/issues/1363/events | https://github.com/huggingface/datasets/pull/1363 | 760,160,944 | MDExOlB1bGxSZXF1ZXN0NTM1MDM4NjM0 | 1,363 | Adding OPUS MultiUN | [] | closed | false | null | 0 | 2020-12-09T09:29:01Z | 2020-12-09T17:54:20Z | 2020-12-09T17:54:20Z | null | Adding UnMulti
http://www.euromatrixplus.net/multi-un/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1363/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1363/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1363.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1363",
"merged_at": "2020-12-09T17:54:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1363.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1363"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/149/comments | https://api.github.com/repos/huggingface/datasets/issues/149/events | https://github.com/huggingface/datasets/issues/149 | 619,735,739 | MDU6SXNzdWU2MTk3MzU3Mzk= | 149 | [Feature request] Add Ubuntu Dialogue Corpus dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2020-05-17T15:42:39Z | 2020-05-18T17:01:46Z | 2020-05-18T17:01:46Z | null | https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/149/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/149/timeline | null | completed | null | null | false | [
"@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual download by following the download instructions in the [repos]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator).\r\nMaybe we can close this issue for... |
https://api.github.com/repos/huggingface/datasets/issues/2460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2460/comments | https://api.github.com/repos/huggingface/datasets/issues/2460/events | https://github.com/huggingface/datasets/pull/2460 | 915,268,536 | MDExOlB1bGxSZXF1ZXN0NjY1MTAyMjA4 | 2,460 | Revert default in-memory for small datasets | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | {
"closed_at": "2021-06-08T18:51:04Z",
"closed_issues": 2,
"created_at": "2021-04-20T16:49:16Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-06-08T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/4",
"id": 6680642,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/4/labels",
"node_id": "MDk6TWlsZXN0b25lNjY4MDY0Mg==",
"number": 4,
"open_issues": 0,
"state": "closed",
"title": "1.8",
"updated_at": "2021-06-08T18:51:37Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/4"
} | 1 | 2021-06-08T17:14:23Z | 2021-06-08T18:04:14Z | 2021-06-08T17:55:43Z | null | Close #2458 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2460/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2460/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2460.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2460",
"merged_at": "2021-06-08T17:55:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2460.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2460"
} | true | [
"Thank you for this welcome change guys!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2362 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2362/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2362/comments | https://api.github.com/repos/huggingface/datasets/issues/2362/events | https://github.com/huggingface/datasets/pull/2362 | 892,100,749 | MDExOlB1bGxSZXF1ZXN0NjQ0ODYzOTQw | 2,362 | Fix web_nlg metadata | [] | closed | false | null | 3 | 2021-05-14T17:15:07Z | 2021-05-17T13:44:17Z | 2021-05-17T13:42:28Z | null | Our metadata storage system does not support `.` inside keys. cc @Pierrci
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2362/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2362/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2362.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2362",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2362.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2362"
} | true | [
"Hi ! `release_v2.1` and the others are dataset configuration names.\r\n\r\nThe configuration names are used to show the right code snippet in the UI to load the dataset.\r\nFor example if the parsing of the web_nlg tags worked correctly we would have:\r\n` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2909/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2909/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2909.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2909",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2909.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2909"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/844/comments | https://api.github.com/repos/huggingface/datasets/issues/844/events | https://github.com/huggingface/datasets/pull/844 | 741,835,661 | MDExOlB1bGxSZXF1ZXN0NTIwMDgwNzM5 | 844 | add newlines to amazon desc | [] | closed | false | null | 0 | 2020-11-12T18:41:20Z | 2020-11-12T18:42:25Z | 2020-11-12T18:42:21Z | null | Just a quick formatting fix to hopefully make it render nicer on Viewer | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/844/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/844/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/844.diff",
"html_url": "https://github.com/huggingface/datasets/pull/844",
"merged_at": "2020-11-12T18:42:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/844.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/844"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3544 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3544/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3544/comments | https://api.github.com/repos/huggingface/datasets/issues/3544/events | https://github.com/huggingface/datasets/issues/3544 | 1,095,784,681 | I_kwDODunzps5BUFjp | 3,544 | Ability to split a dataset in multiple files. | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 0 | 2022-01-06T23:02:25Z | 2022-01-06T23:02:25Z | null | null | Hello,
**Is your feature request related to a problem? Please describe.**
My use case is that I have one writer that adds columns and multiple workers that read the same `Dataset`. Each worker should have access to the columns added by the writer when they reload the dataset.
I understand that we shouldn't overwrite an arrow file, as this could cause a segfault and similar issues. Before 1.16, I was able to overwrite the dataset, and that would work most of the time with some retries.
**Describe the solution you'd like**
I was thinking that if we could append to `Dataset._data_files`, then when the workers reload the Dataset, they would get the new columns.
**Describe alternatives you've considered**
I currently need to either:
1. Save multiple "versions" of the dataset and load the latest (see the sketch after this list).
2. Try working with cache files to get the latest columns.
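A minimal sketch of option 1, assuming the writer saves numbered snapshots with `save_to_disk` and the workers reload the highest-numbered one (`save_version` and `load_latest` are hypothetical helper names, not library API):
```python
# Hypothetical helpers for the "save versions, load the latest" workaround.
import os

from datasets import Dataset, load_from_disk

def save_version(ds: Dataset, root: str) -> str:
    os.makedirs(root, exist_ok=True)
    path = os.path.join(root, f"v{len(os.listdir(root))}")  # next snapshot index
    ds.save_to_disk(path)
    return path

def load_latest(root: str) -> Dataset:
    # Pick the snapshot with the highest numeric suffix (not lexicographic order).
    latest = max(os.listdir(root), key=lambda name: int(name.lstrip("v")))
    return load_from_disk(os.path.join(root, latest))
```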
**Additional context**
I think this would be a great addition to HFDataset, as Parquet supports multi-file input out of the box!
I can make a PR myself with some pointers as needed :) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3544/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3544/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/561 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/561/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/561/comments | https://api.github.com/repos/huggingface/datasets/issues/561/events | https://github.com/huggingface/datasets/pull/561 | 690,871,415 | MDExOlB1bGxSZXF1ZXN0NDc3Njk1NDQy | 561 | Made `share_dataset` more readable | [] | closed | false | null | 0 | 2020-09-02T09:34:48Z | 2020-09-03T09:00:30Z | 2020-09-03T09:00:29Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/561/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/561/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/561.diff",
"html_url": "https://github.com/huggingface/datasets/pull/561",
"merged_at": "2020-09-03T09:00:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/561.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/561"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/352 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/352/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/352/comments | https://api.github.com/repos/huggingface/datasets/issues/352/events | https://github.com/huggingface/datasets/pull/352 | 653,128,883 | MDExOlB1bGxSZXF1ZXN0NDQ2MTA1Mjky | 352 | 🐛[BugFix]fix seqeval | [] | closed | false | null | 7 | 2020-07-08T09:12:12Z | 2020-07-16T08:26:46Z | 2020-07-16T08:26:46Z | null | Fix seqeval process labels such as 'B', 'B-ARGM-LOC' | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/352/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/352/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/352.diff",
"html_url": "https://github.com/huggingface/datasets/pull/352",
"merged_at": "2020-07-16T08:26:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/352.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/352"
} | true | [
"I think this is good but can you detail a bit the behavior before and after your fix?",
"examples:\r\n\r\ninput: `['B', 'I', 'I', 'O', 'B', 'I']`\r\nbefore: `[('B', 0, 0), ('I', 1, 2), ('B', 4, 4), ('I', 5, 5)]`\r\nafter: `[('_', 0, 2), ('_', 4, 5)]`\r\n\r\ninput: `['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O',... |
https://api.github.com/repos/huggingface/datasets/issues/3095 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3095/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3095/comments | https://api.github.com/repos/huggingface/datasets/issues/3095/events | https://github.com/huggingface/datasets/issues/3095 | 1,027,453,146 | I_kwDODunzps49PbDa | 3,095 | `cast_column` makes audio decoding fail | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-10-15T13:36:58Z | 2023-04-07T09:43:20Z | 2021-10-15T15:38:30Z | null | ## Describe the bug
After changing the sampling rate, automatic decoding fails.
## Steps to reproduce the bug
```python
from datasets import load_dataset
import datasets
ds = load_dataset("common_voice", "ab", split="train")
ds = ds.cast_column("audio", datasets.features.Audio(sampling_rate=16_000))
print(ds[0]["audio"]) # <- this fails currently
```
yields:
```
TypeError: forward() takes 2 positional arguments but 4 were given
```
## Expected results
no failure
## Actual results
See the `TypeError` traceback above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.13.2 (master)
- Platform: Linux-5.11.0-1019-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyArrow version: 5.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3095/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3095/timeline | null | completed | null | null | false | [
"cc @anton-l @albertvillanova ",
"Thanks for reporting, @patrickvonplaten.\r\n\r\nI think the issue is related to mp3 resampling, not to `cast_column`.\r\n\r\nYou can check that `cast_column` works OK with non-mp3 audio files:\r\n```python\r\nfrom datasets import load_dataset\r\nimport datasets\r\nds = load_datas... |
https://api.github.com/repos/huggingface/datasets/issues/4637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4637/comments | https://api.github.com/repos/huggingface/datasets/issues/4637/events | https://github.com/huggingface/datasets/issues/4637 | 1,294,818,236 | I_kwDODunzps5NLVu8 | 4,637 | The "all" split breaks streaming | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 6 | 2022-07-05T21:56:49Z | 2022-07-15T13:59:30Z | null | null | ## Describe the bug
Not sure if this is a bug or just the way streaming works, but setting `streaming=True` did not work when setting `split="all"`.
## Steps to reproduce the bug
The following works:
```python
ds = load_dataset('super_glue', 'wsc.fixed', split='all')
```
The following throws `ValueError: Bad split: all. Available splits: ['train', 'validation', 'test']`:
```python
ds = load_dataset('super_glue', 'wsc.fixed', split='all', streaming=True)
```
## Expected results
An iterator over all splits.
## Actual results
I had to do the following to achieve the desired result:
```python
from itertools import chain
ds = load_dataset('super_glue', 'wsc.fixed', streaming=True)
it = chain.from_iterable(ds.values())
```
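As an additional hedged sketch (assuming the splits share the same features, as they do here), `datasets.interleave_datasets` can mix the streamed splits instead of concatenating them:
```python
# Sketch: interleave the streamed splits rather than chaining them,
# so examples from train/validation/test alternate in the output.
from datasets import interleave_datasets, load_dataset

ds = load_dataset('super_glue', 'wsc.fixed', streaming=True)
mixed = interleave_datasets(list(ds.values()))
print(next(iter(mixed)))
```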
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.3.2
- Platform: Linux-4.15.0-176-generic-x86_64-with-glibc2.31
- Python version: 3.10.5
- PyArrow version: 8.0.0
- Pandas version: 1.4.3
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4637/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4637/timeline | null | null | null | null | false | [
"Thanks for reporting @cakiki.\r\n\r\nYes, this is a bug. We are investigating it.",
"@albertvillanova Nice! Let me know if it's something I can fix my self; would love to contribtue!",
"@cakiki I was working on this but if you would like to contribute, go ahead. I will close my PR. ;)\r\n\r\nFor the moment I j... |
https://api.github.com/repos/huggingface/datasets/issues/3132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3132/comments | https://api.github.com/repos/huggingface/datasets/issues/3132/events | https://github.com/huggingface/datasets/issues/3132 | 1,032,505,430 | I_kwDODunzps49ishW | 3,132 | Support Audio feature in streaming mode | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2021-10-21T13:32:18Z | 2021-11-12T14:13:04Z | 2021-11-12T14:13:04Z | null | Currently, Audio feature is only supported for non-streaming datasets.
Due to the large size of many speech datasets, we should also support Audio feature in streaming mode.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3132/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3132/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2874/comments | https://api.github.com/repos/huggingface/datasets/issues/2874/events | https://github.com/huggingface/datasets/pull/2874 | 989,685,328 | MDExOlB1bGxSZXF1ZXN0NzI4Mzg2Mjg4 | 2,874 | Support streaming datasets that use pathlib | [] | closed | false | null | 3 | 2021-09-07T07:35:49Z | 2021-09-07T18:25:22Z | 2021-09-07T11:41:15Z | null | This PR extends the support in streaming mode for datasets that use `pathlib.Path`.
Related to: #2866.
CC: @severo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2874/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2874/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2874.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2874",
"merged_at": "2021-09-07T11:41:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2874.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2874"
} | true | [
"I've tried https://github.com/huggingface/datasets/issues/2866 again, and I get the same error.\r\n\r\n```python\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```",
"@severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as... |
https://api.github.com/repos/huggingface/datasets/issues/5106 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5106/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5106/comments | https://api.github.com/repos/huggingface/datasets/issues/5106/events | https://github.com/huggingface/datasets/pull/5106 | 1,406,635,758 | PR_kwDODunzps5ArM6G | 5,106 | Fix task template reload from dict | [] | closed | false | null | 2 | 2022-10-12T18:33:49Z | 2022-10-13T09:59:07Z | 2022-10-13T09:56:51Z | null | Since #4926 the JSON dumps are simplified and it made task template dicts empty by default.
I fixed this by always including the task name which is needed to reload a task from a dict | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5106/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5106/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5106.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5106",
"merged_at": "2022-10-13T09:56:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5106.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5106"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Just wondering if there might be other data classes default values missed that could cause an issue... Apart from feature-like classes and tasks, I don't see any others though...\r\n\r\nI think we're good ! `asdict` is used on the ... |
https://api.github.com/repos/huggingface/datasets/issues/1632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1632/comments | https://api.github.com/repos/huggingface/datasets/issues/1632/events | https://github.com/huggingface/datasets/issues/1632 | 774,388,625 | MDU6SXNzdWU3NzQzODg2MjU= | 1,632 | SICK dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 0 | 2020-12-24T12:40:14Z | 2021-02-05T15:49:25Z | 2021-02-05T15:49:25Z | null | Hi, this would be great to have this dataset included. I might be missing something, but I could not find it in the list of already included datasets. Thank you.
## Adding a Dataset
- **Name:** SICK
- **Description:** SICK consists of about 10,000 English sentence pairs that include many examples of lexical, syntactic, and semantic phenomena.
- **Paper:** https://www.aclweb.org/anthology/L14-1314/
- **Data:** http://marcobaroni.org/composes/sick.html
- **Motivation:** This dataset is well known in the NLP community and is used for recognizing entailment between sentences.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1632/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1632/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4012 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4012/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4012/comments | https://api.github.com/repos/huggingface/datasets/issues/4012/events | https://github.com/huggingface/datasets/pull/4012 | 1,180,350,083 | PR_kwDODunzps40_qgo | 4,012 | Rename wer to cer | [] | closed | false | null | 0 | 2022-03-25T05:06:05Z | 2022-03-28T13:57:25Z | 2022-03-28T13:57:25Z | null | wer variable changed to cer in README file
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4012/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4012/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4012.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4012",
"merged_at": "2022-03-28T13:57:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4012.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4012"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3695/comments | https://api.github.com/repos/huggingface/datasets/issues/3695/events | https://github.com/huggingface/datasets/pull/3695 | 1,129,730,148 | PR_kwDODunzps4yXP44 | 3,695 | Fix ClassLabel to/from dict when passed names_file | [] | closed | false | null | 0 | 2022-02-10T09:47:10Z | 2022-02-11T23:02:32Z | 2022-02-11T23:02:31Z | null | Currently, `names_file` is a field of the data class `ClassLabel`, thus appearing when transforming it to dict (when saving infos). Afterwards, when trying to read it from infos, it conflicts with the other field `names`.
This PR removes `names_file` as a field of the data class `ClassLabel`.
- it is only used at instantiation to generate the `names` field
Fix #3631. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3695/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3695/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3695.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3695",
"merged_at": "2022-02-11T23:02:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3695.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3695"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4834/comments | https://api.github.com/repos/huggingface/datasets/issues/4834/events | https://github.com/huggingface/datasets/pull/4834 | 1,336,993,511 | PR_kwDODunzps49FJOu | 4,834 | Fix documentation card of recipe_nlg dataset | [] | closed | false | null | 1 | 2022-08-12T09:49:39Z | 2022-08-12T11:28:18Z | 2022-08-12T11:13:40Z | null | Fix documentation card of recipe_nlg dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4834/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4834/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4834.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4834",
"merged_at": "2022-08-12T11:13:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4834.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4834"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |