url stringlengths 58 61 | repository_url stringclasses 1
value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 46 51 | id int64 599M 1.83B | node_id stringlengths 18 32 | number int64 1 6.09k | title stringlengths 1 290 | labels list | state stringclasses 2
values | locked bool 1
class | milestone dict | comments int64 0 54 | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | closed_at stringlengths 20 20 ⌀ | active_lock_reason null | body stringlengths 0 228k ⌀ | reactions dict | timeline_url stringlengths 67 70 | performed_via_github_app null | state_reason stringclasses 3
values | draft bool 2
classes | pull_request dict | is_pull_request bool 2
classes | comments_text list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2860/comments | https://api.github.com/repos/huggingface/datasets/issues/2860/events | https://github.com/huggingface/datasets/issues/2860 | 985,013,339 | MDU6SXNzdWU5ODUwMTMzMzk= | 2,860 | Cannot download TOTTO dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-09-01T11:04:10Z | 2021-09-02T06:47:40Z | 2021-09-02T06:47:40Z | null | Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip
`datasets version: 1.11.0`
# How to reproduce:
```py
from datasets import load_dataset
dataset = load_dataset('totto')
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2860/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2860/timeline | null | completed | null | null | false | [
"Hola @mrm8488, thanks for reporting.\r\n\r\nApparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f\r\n\r\nI'm fixing it."
] |
https://api.github.com/repos/huggingface/datasets/issues/5874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5874/comments | https://api.github.com/repos/huggingface/datasets/issues/5874/events | https://github.com/huggingface/datasets/issues/5874 | 1,715,708,930 | I_kwDODunzps5mQ6QC | 5,874 | Using as_dataset on a "parquet" builder | [] | closed | false | null | 1 | 2023-05-18T14:09:03Z | 2023-05-31T13:23:55Z | 2023-05-31T13:23:55Z | null | ### Describe the bug
I used a custom builder to ``download_and_prepare`` a dataset. The first (very minor) issue is that the doc seems to suggest ``download_and_prepare`` will return the dataset, while it does not ([builder.py](https://github.com/huggingface/datasets/blob/main/src/datasets/builder.py#L718-L738)).
```
>>> from datasets import load_dataset_builder
>>> builder = load_dataset_builder("rotten_tomatoes")
>>> ds = builder.download_and_prepare("./output_dir", file_format="parquet")
```
The main issue I am facing is loading the dataset from those parquet files. I used the `as_dataset` method suggested by the doc, however it returns:
`
FileNotFoundError: [Errno 2] Failed to open local file 'output_dir/__main__-train-00000-of-00245.arrow'. Detail:
[errno 2] No such file or directory.
`
### Steps to reproduce the bug
1. Create a custom builder of some sort: `builder = CustomBuilder()`.
2. Run `download_and_prepare` with the parquet format: `builder.download_and_prepare("./output_dir", file_format="parquet")`.
3. Run `dataset = builder.as_dataset()`.
### Expected behavior
I guess I'd expect `as_dataset` to generate the dataset in arrow format if it has to, or to suggest an alternative way to load the dataset (I've also tried other methods with `load_dataset` to no avail, probably due to misunderstandings on my part).
### Environment info
```
- `datasets` version: 2.12.0
- Platform: Linux-5.15.0-1027-gcp-x86_64-with-glibc2.31
- Python version: 3.10.0
- Huggingface_hub version: 0.14.1
- PyArrow version: 8.0.0
- Pandas version: 1.5.3
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5874/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5874/timeline | null | completed | null | null | false | [
"Hi! You can refer to [this doc](https://huggingface.co/docs/datasets/filesystems#load-and-save-your-datasets-using-your-cloud-storage-filesystem) to see the intended usage (basically, it skips the Arrow -> Parquet conversion step in `ds = load_dataset(...); ds.to_parquet(\"path/to/parquet\")`) and allows writing P... |
https://api.github.com/repos/huggingface/datasets/issues/5841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5841/comments | https://api.github.com/repos/huggingface/datasets/issues/5841/events | https://github.com/huggingface/datasets/issues/5841 | 1,705,286,639 | I_kwDODunzps5lpJvv | 5,841 | Abusurdly slow on iteration | [] | closed | false | null | 4 | 2023-05-11T08:04:09Z | 2023-05-15T15:38:13Z | 2023-05-15T15:38:13Z | null | ### Describe the bug
I am attempting to iterate through an image dataset, but I am encountering a significant slowdown in the iteration speed. In order to investigate this issue, I conducted the following experiment:
```python
a=torch.randn(100,224)
a=torch.stack([a] * 10000)
a.shape
# %%
ds=Dataset.from_dict({"tensor":a})
for i in tqdm(ds.with_format("numpy")):
pass
for i in tqdm(ds.with_format("torch")):
pass
```
I noticed that the dataset in numpy format performs significantly faster than the one in torch format. My hypothesis is that the dataset undergoes a transformation process of torch->python->numpy(torch) in the background, which might be causing the slowdown. Is there any way to expedite the process by bypassing such transformations?
Furthermore, if I increase the size of a to an image shape, like:
```python
a=torch.randn(3,224,224)
```
the iteration speed becomes absurdly slow, around 100 iterations per second, whereas the speed with numpy format is approximately 250 iterations per second. This level of speed would be unacceptable for large image datasets, as it could take several hours just to iterate through a single epoch.
### Steps to reproduce the bug
```python
a=torch.randn(100,224)
a=torch.stack([a] * 10000)
a.shape
# %%
ds=Dataset.from_dict({"tensor":a})
for i in tqdm(ds.with_format("numpy")):
pass
for i in tqdm(ds.with_format("torch")):
pass
```
### Expected behavior
iteration faster
### Environment info
- `datasets` version: 2.11.0
- Platform: Linux-5.4.0-148-generic-x86_64-with-glibc2.10
- Python version: 3.8.16
- Huggingface_hub version: 0.13.4
- PyArrow version: 11.0.0
- Pandas version: 2.0.0 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5841/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5841/timeline | null | completed | null | null | false | [
"Hi ! You can try to use the [Image](https://huggingface.co/docs/datasets/v2.12.0/en/package_reference/main_classes#datasets.Image) type which [decodes images on-the-fly](https://huggingface.co/docs/datasets/v2.12.0/en/about_dataset_features#image-feature) into pytorch tensors :)\r\n\r\n```python\r\nds = Dataset.fr... |
https://api.github.com/repos/huggingface/datasets/issues/5827 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5827/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5827/comments | https://api.github.com/repos/huggingface/datasets/issues/5827/events | https://github.com/huggingface/datasets/issues/5827 | 1,698,891,246 | I_kwDODunzps5lQwXu | 5,827 | load json dataset interrupt when dtype cast problem occured | [] | open | false | null | 1 | 2023-05-07T04:52:09Z | 2023-05-10T12:32:28Z | null | null | ### Describe the bug
i have a json like this:
[
{"id": 1, "name": 1},
{"id": 2, "name": "Nan"},
{"id": 3, "name": 3},
....
]
,which have several problematic rows data like row 2, then i load it with datasets.load_dataset('json', data_files=['xx.json'], split='train'), it will report like this:
Generating train split: 0 examples [00:00, ? examples/s]Failed to read file 'C:\Users\gawinjunwu\Downloads\test\data\a.json' with error <class 'pyarrow.lib.ArrowInvalid'>: Could not convert '2' with type str: tried to convert to int64
Traceback (most recent call last):
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1858, in _prepare_split_single
for _, table in generator:
File "D:\Python3.9\lib\site-packages\datasets\packaged_modules\json\json.py", line 146, in _generate_tables
raise ValueError(f"Not able to read records in the JSON file at {file}.") from None
ValueError: Not able to read records in the JSON file at C:\Users\gawinjunwu\Downloads\test\data\a.json.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "c:\Users\gawinjunwu\Downloads\test\scripts\a.py", line 4, in <module>
ds = load_dataset('json', data_dir='data', split='train')
File "D:\Python3.9\lib\site-packages\datasets\load.py", line 1797, in load_dataset
builder_instance.download_and_prepare(
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 890, in download_and_prepare
self._download_and_prepare(
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 985, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1746, in _prepare_split
for job_id, done, content in self._prepare_split_single(
File "D:\Python3.9\lib\site-packages\datasets\builder.py", line 1891, in _prepare_split_single
raise DatasetGenerationError("An error occurred while generating the dataset") from e
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset.
Could datasets skip those problematic data row?
### Steps to reproduce the bug
prepare a json file like this:
[
{"id": 1, "name": 1},
{"id": 2, "name": "Nan"},
{"id": 3, "name": 3}
]
then use datasets.load_dataset('json', dir_files=['xxx.json']) to load the json file
### Expected behavior
skip the problematic data row and load row1 and row3
### Environment info
python3.9 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5827/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5827/timeline | null | null | null | null | false | [
"Indeed the JSON dataset builder raises an error when it encounters an unexpected type.\r\n\r\nThere's an old PR open to add away to ignore such elements though, if it can help: https://github.com/huggingface/datasets/pull/2838"
] |
https://api.github.com/repos/huggingface/datasets/issues/5949 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5949/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5949/comments | https://api.github.com/repos/huggingface/datasets/issues/5949/events | https://github.com/huggingface/datasets/pull/5949 | 1,754,843,717 | PR_kwDODunzps5S4oPC | 5,949 | Replace metadata utils with `huggingface_hub`'s RepoCard API | [] | closed | false | null | 8 | 2023-06-13T13:03:19Z | 2023-06-27T16:47:51Z | 2023-06-27T16:38:32Z | null | Use `huggingface_hub`'s RepoCard API instead of `DatasetMetadata` for modifying the card's YAML, and deprecate `datasets.utils.metadata` and `datasets.utils.readme`.
After removing these modules, we can also delete `datasets.utils.resources` since the moon landing repo now stores its own version of these resources for the metadata UI.
PS: this change requires bumping `huggingface_hub` to 0.13.0 (Transformers requires 0.14.0, so should be ok) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5949/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5949/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5949.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5949",
"merged_at": "2023-06-27T16:38:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5949.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5949"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/3684 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3684/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3684/comments | https://api.github.com/repos/huggingface/datasets/issues/3684/events | https://github.com/huggingface/datasets/pull/3684 | 1,125,133,664 | PR_kwDODunzps4yIOer | 3,684 | [fix]: iwslt2017 download urls | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 7 | 2022-02-06T07:56:55Z | 2022-09-22T16:20:19Z | 2022-09-22T16:20:18Z | null | Fixes #2076. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3684/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3684/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3684.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3684",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3684.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3684"
} | true | [
"Hi ! Thanks for the fix ! Do you know where this new URL comes from ?\r\n\r\nAlso we try to not use Google Drive if possible, since it has download quota limitations. Do you know if the data is available from another host than Google Drive ?",
"Oh, I found it just by following the link from the [IWSLT2017 homepa... |
https://api.github.com/repos/huggingface/datasets/issues/3347 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3347/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3347/comments | https://api.github.com/repos/huggingface/datasets/issues/3347/events | https://github.com/huggingface/datasets/pull/3347 | 1,067,738,902 | PR_kwDODunzps4vNthw | 3,347 | iter_archive for zip files | [] | closed | false | null | 1 | 2021-11-30T22:34:17Z | 2021-12-04T00:22:22Z | 2021-12-04T00:22:11Z | null | * In this PR, I added the option to iterate through zipfiles for `download_manager.py` only.
* Next PR will be the same applied to `streaming_download_manager.py`.
* Related issue #3272.
## Comments :
* There is no `.isreg()` equivalent in zipfile library to check if file is Regular so I used `.is_dir()` instead to skip directories.
* For now I got `streaming_download_manager.py` working for local zip files, but not for urls. I get the following error when I test it on an archive in google drive, so still working on it. `BlockSizeError: Got more bytes so far (>2112) than requested (22)`
## Tasks :
- [x] download_manager.py
- [ ] streaming_download_manager.py | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3347/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3347/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3347.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3347",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3347.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3347"
} | true | [
"And also don't always try streaming with Google Drive - it can have issues because of how Google Drive works (with quotas, restrictions, etc.) and it can indeed cause `BlockSizeError`.\r\n\r\nFeel free to host your test data elsewhere, such as in a dataset repository on https://huggingface.co (see [here](https://h... |
https://api.github.com/repos/huggingface/datasets/issues/4584 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4584/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4584/comments | https://api.github.com/repos/huggingface/datasets/issues/4584/events | https://github.com/huggingface/datasets/pull/4584 | 1,286,911,993 | PR_kwDODunzps46eVF7 | 4,584 | Add binary classification task IDs | [] | closed | false | null | 4 | 2022-06-28T07:30:39Z | 2023-01-26T09:27:53Z | 2023-01-26T09:27:52Z | null | As a precursor to aligning the task IDs in `datasets` and AutoTrain, we need a way to distinguish binary vs multiclass vs multilabel classification.
This PR adds binary classification to the task IDs to enable this.
Related AutoTrain issue: https://github.com/huggingface/autonlp-backend/issues/597
cc @abhishekkrthakur @SBrandeis | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4584/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4584/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4584.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4584",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4584.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4584"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4584). All of your documentation changes will be reflected on that endpoint.",
"> Awesome thanks ! Can you add it to https://github.com/huggingface/hub-docs/blob/main/js/src/lib/interfaces/Types.ts first please ? This is where ... |
https://api.github.com/repos/huggingface/datasets/issues/1000 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1000/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1000/comments | https://api.github.com/repos/huggingface/datasets/issues/1000/events | https://github.com/huggingface/datasets/pull/1000 | 755,292,066 | MDExOlB1bGxSZXF1ZXN0NTMxMDMxMTE1 | 1,000 | UM005: Urdu <> English Translation Dataset | [] | closed | false | null | 0 | 2020-12-02T13:51:35Z | 2020-12-04T15:34:30Z | 2020-12-04T15:34:29Z | null | Adds Urdu-English dataset for machine translation: http://ufal.ms.mff.cuni.cz/umc/005-en-ur/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1000/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1000/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1000.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1000",
"merged_at": "2020-12-04T15:34:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1000.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1000"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2389/comments | https://api.github.com/repos/huggingface/datasets/issues/2389/events | https://github.com/huggingface/datasets/pull/2389 | 897,822,270 | MDExOlB1bGxSZXF1ZXN0NjQ5Nzc3MDMz | 2,389 | Insert task templates for text classification | [] | closed | false | null | 6 | 2021-05-21T08:36:26Z | 2021-05-28T15:28:58Z | 2021-05-28T15:26:28Z | null | This PR inserts text-classification templates for datasets with the following properties:
* Only one config
* At most two features of `(Value, ClassLabel)` type
Note that this misses datasets like `sentiment140` which only has `Value` type features - these will be handled in a separate PR | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2389/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2389/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2389.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2389",
"merged_at": "2021-05-28T15:26:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2389.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2389"
} | true | [
"Update: found a few datasets that slipped through the net. Adding them shortly!",
"You might have thought about this already, but would it make sense to use the `datasets.features.ClassLabel` values when possible instead of declaring the list once for the `feature` and once for the `template`?",
"> You might h... |
https://api.github.com/repos/huggingface/datasets/issues/6013 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6013/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6013/comments | https://api.github.com/repos/huggingface/datasets/issues/6013/events | https://github.com/huggingface/datasets/issues/6013 | 1,796,083,437 | I_kwDODunzps5rDg7t | 6,013 | [FR] `map` should reuse unchanged columns from the previous dataset to avoid disk usage | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
},
{
"color": "BDE59C",
"default": fals... | open | false | null | 1 | 2023-07-10T06:42:20Z | 2023-07-10T15:37:52Z | null | null | ### Feature request
Currently adding a new column with `map` will cause all the data in the dataset to be duplicated and stored/cached on the disk again. It should reuse unchanged columns.
### Motivation
This allows having datasets with different columns but sharing some basic columns. Currently, these datasets would become too expensive to store and one would need some kind of on-the-fly join; which also doesn't seem implemented.
### Your contribution
_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6013/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6013/timeline | null | null | null | null | false | [
"You can use the `remove_columns` parameter in `map` to avoid duplicating the columns (and save disk space) and then concatenate the original dataset with the map result:\r\n```python\r\nfrom datasets import concatenate_datasets\r\n# dummy example\r\nds_new = ds.map(lambda x: {\"new_col\": x[\"col\"] + 2}, remove_c... |
https://api.github.com/repos/huggingface/datasets/issues/5954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5954/comments | https://api.github.com/repos/huggingface/datasets/issues/5954/events | https://github.com/huggingface/datasets/pull/5954 | 1,756,572,994 | PR_kwDODunzps5S-hSP | 5,954 | Better filenotfound for gated | [] | closed | false | null | 3 | 2023-06-14T10:33:10Z | 2023-06-14T12:33:27Z | 2023-06-14T12:26:31Z | null | close https://github.com/huggingface/datasets/issues/5953
<img width="1292" alt="image" src="https://github.com/huggingface/datasets/assets/42851186/270fe5bc-1739-4878-b7bc-ab6d35336d4d">
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5954/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5954/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5954.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5954",
"merged_at": "2023-06-14T12:26:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5954.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5954"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/5049 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5049/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5049/comments | https://api.github.com/repos/huggingface/datasets/issues/5049/events | https://github.com/huggingface/datasets/pull/5049 | 1,392,361,381 | PR_kwDODunzps4_7zOY | 5,049 | Add `kwargs` to `Dataset.from_generator` | [] | closed | false | null | 1 | 2022-09-30T12:24:27Z | 2022-10-03T11:00:11Z | 2022-10-03T10:58:15Z | null | Add the `kwargs` param to `from_generator` to align it with the rest of the `from_` methods (this param allows passing custom `writer_batch_size` for instance). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5049/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5049/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5049.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5049",
"merged_at": "2022-10-03T10:58:15Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5049.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5049"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1383/comments | https://api.github.com/repos/huggingface/datasets/issues/1383/events | https://github.com/huggingface/datasets/pull/1383 | 760,331,480 | MDExOlB1bGxSZXF1ZXN0NTM1MTgxMDQ2 | 1,383 | added conv ai 2 | [] | closed | false | null | 2 | 2020-12-09T13:30:12Z | 2020-12-13T18:54:42Z | 2020-12-13T18:54:41Z | null | Dataset : https://github.com/DeepPavlov/convai/tree/master/2018 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1383/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1383/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1383.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1383",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1383.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1383"
} | true | [
"@lhoestq Thank you for the suggestions. I added the changes to the branch and seems after rebasing it to master, all the commits previous commits got added. Should I create a new PR or should I keep this one only ? ",
"closing this one in favor of #1527 "
] |
https://api.github.com/repos/huggingface/datasets/issues/734 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/734/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/734/comments | https://api.github.com/repos/huggingface/datasets/issues/734/events | https://github.com/huggingface/datasets/pull/734 | 721,767,848 | MDExOlB1bGxSZXF1ZXN0NTAzNjMwMDcz | 734 | Fix GLUE metric description | [] | closed | false | null | 0 | 2020-10-14T20:44:14Z | 2020-10-15T09:27:43Z | 2020-10-15T09:27:42Z | null | Small typo: the description says translation instead of prediction. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/734/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/734/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/734.diff",
"html_url": "https://github.com/huggingface/datasets/pull/734",
"merged_at": "2020-10-15T09:27:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/734.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/734"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2764 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2764/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2764/comments | https://api.github.com/repos/huggingface/datasets/issues/2764/events | https://github.com/huggingface/datasets/pull/2764 | 962,554,799 | MDExOlB1bGxSZXF1ZXN0NzA1MzI3MDQ5 | 2,764 | Add DER metric for SUPERB speaker diarization task | [
{
"color": "E3165C",
"default": false,
"description": "",
"id": 4190228726,
"name": "transfer-to-evaluate",
"node_id": "LA_kwDODunzps75wdD2",
"url": "https://api.github.com/repos/huggingface/datasets/labels/transfer-to-evaluate"
}
] | closed | false | null | 1 | 2021-08-06T09:12:36Z | 2023-07-11T09:35:23Z | 2023-07-11T09:35:23Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2764/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2764/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/2764.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2764",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2764.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2764"
} | true | [
"Metrics are deprecated in `datasets` and `evaluate` should be used instead: https://github.com/huggingface/evaluate"
] |
https://api.github.com/repos/huggingface/datasets/issues/2072 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2072/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2072/comments | https://api.github.com/repos/huggingface/datasets/issues/2072/events | https://github.com/huggingface/datasets/pull/2072 | 834,054,837 | MDExOlB1bGxSZXF1ZXN0NTk0OTQ5NjA4 | 2,072 | Fix docstring issues | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 2 | 2021-03-17T18:13:44Z | 2021-03-24T08:20:57Z | 2021-03-18T12:41:21Z | null | Fix docstring issues. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2072/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2072/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2072.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2072",
"merged_at": "2021-03-18T12:41:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2072.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2072"
} | true | [
"I think I will stop pushing to this PR, so that it can me merged for today release. \r\n\r\nI will open another PR for further fixing docs.\r\n\r\nDo you agree, @lhoestq ?",
"Sounds good thanks !"
] |
https://api.github.com/repos/huggingface/datasets/issues/3501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3501/comments | https://api.github.com/repos/huggingface/datasets/issues/3501/events | https://github.com/huggingface/datasets/pull/3501 | 1,090,413,758 | PR_kwDODunzps4wXM8H | 3,501 | Update pib dataset card | [] | closed | false | null | 0 | 2021-12-29T10:14:40Z | 2021-12-29T11:13:21Z | 2021-12-29T11:13:21Z | null | Related to #3496 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3501/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3501/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3501.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3501",
"merged_at": "2021-12-29T11:13:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3501.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3501"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2357 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2357/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2357/comments | https://api.github.com/repos/huggingface/datasets/issues/2357/events | https://github.com/huggingface/datasets/pull/2357 | 890,595,693 | MDExOlB1bGxSZXF1ZXN0NjQzNTk0NDcz | 2,357 | Adding Microsoft CodeXGlue Datasets | [] | closed | false | null | 16 | 2021-05-13T00:43:01Z | 2021-06-08T09:29:57Z | 2021-06-08T09:29:57Z | null | Hi there, this is a new pull request to get the CodeXGlue datasets into the awesome HF datasets lib. Most of the work has been done in this PR #997 by the awesome @madlag. However, that PR has been stale for a while now and so I spoke with @lhoestq about finishing up the final mile and so he told me to open a new PR with the final changes :smile:.
I believe I've met all of the changes still left in the old PR to do, except for the change to the languages. I believe the READMEs should include the different programming languages used rather than just using the tag "code" as when searching for datasets, SE researchers may specifically be looking only for what type of programming language and so being able to quickly filter will be very valuable. Let me know what you think of that or if you still believe it should be the "code" tag @lhoestq. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2357/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2357/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2357.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2357",
"merged_at": "2021-06-08T09:29:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2357.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2357"
} | true | [
"Oh one other thing. Mentioned in the PR was that I would need to regenerate the dataset_infos.json once the camel casing was done. However, I am unsure why this is the case since there is no reference to any object names in the dataset_infos.json file.\r\n\r\nIf it needs to be reran, I can try it do it on my own m... |
https://api.github.com/repos/huggingface/datasets/issues/572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/572/comments | https://api.github.com/repos/huggingface/datasets/issues/572/events | https://github.com/huggingface/datasets/pull/572 | 692,598,231 | MDExOlB1bGxSZXF1ZXN0NDc5MTgyNDU3 | 572 | Add CLUE Benchmark (11 datasets) | [] | closed | false | null | 3 | 2020-09-04T01:57:40Z | 2020-09-07T09:59:11Z | 2020-09-07T09:59:10Z | null | Add 11 tasks of [CLUE](https://github.com/CLUEbenchmark/CLUE). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/572/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/572/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/572.diff",
"html_url": "https://github.com/huggingface/datasets/pull/572",
"merged_at": "2020-09-07T09:59:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/572.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/572"
} | true | [
"Thanks, @lhoestq! I've addressed the comments. \r\nAlso, I have tried to use `ClassLabel` [when possible](https://github.com/huggingface/nlp/pull/572/files#diff-1026ac7d7b78bf029cb0ebe63162c77dR297). Is there still somewhere else we can use `ClassLabel`? ",
"I believe CI failure is unrelated.",
"Great job! "
] |
https://api.github.com/repos/huggingface/datasets/issues/2573 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2573/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2573/comments | https://api.github.com/repos/huggingface/datasets/issues/2573/events | https://github.com/huggingface/datasets/issues/2573 | 934,584,745 | MDU6SXNzdWU5MzQ1ODQ3NDU= | 2,573 | Finding right block-size with JSON loading difficult for user | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 1 | 2021-07-01T08:48:35Z | 2021-07-01T19:10:53Z | null | null | As reported by @thomwolf, while loading a JSON Lines file with "json" loading script, he gets
> json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 383)
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2573/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2573/timeline | null | null | null | null | false | [
"This was actually a second error arising from a too small block-size in the json reader.\r\n\r\nFinding the right block size is difficult for the layman user"
] |
https://api.github.com/repos/huggingface/datasets/issues/4220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4220/comments | https://api.github.com/repos/huggingface/datasets/issues/4220/events | https://github.com/huggingface/datasets/pull/4220 | 1,215,225,802 | PR_kwDODunzps42w5YO | 4,220 | Altered faiss installation comment | [] | closed | false | null | 3 | 2022-04-26T01:20:43Z | 2022-05-09T17:29:34Z | 2022-05-09T17:22:09Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4220/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4220/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4220.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4220",
"merged_at": "2022-05-09T17:22:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4220.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4220"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi ! Can you explain why this change is needed ?",
"Facebook recommends installing FAISS using conda (https://github.com/facebookresearch/faiss/blob/main/INSTALL.md). pip does not seem to have the latest version of FAISS. The lates... |
https://api.github.com/repos/huggingface/datasets/issues/541 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/541/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/541/comments | https://api.github.com/repos/huggingface/datasets/issues/541/events | https://github.com/huggingface/datasets/issues/541 | 688,521,224 | MDU6SXNzdWU2ODg1MjEyMjQ= | 541 | Best practices for training tokenizers with nlp | [] | closed | false | null | 1 | 2020-08-29T12:06:49Z | 2022-10-04T17:28:04Z | 2022-10-04T17:28:04Z | null | Hi, thank you for developing this library.
What do you think are the best practices for training tokenizers using `nlp`? In the document and examples, I could only find pre-trained tokenizers used. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/541/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/541/timeline | null | completed | null | null | false | [
"Docs that explain how to train a tokenizer with `datasets` are available here: https://huggingface.co/docs/tokenizers/training_from_memory#using-the-datasets-library"
] |
https://api.github.com/repos/huggingface/datasets/issues/2897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2897/comments | https://api.github.com/repos/huggingface/datasets/issues/2897/events | https://github.com/huggingface/datasets/pull/2897 | 993,798,386 | MDExOlB1bGxSZXF1ZXN0NzMxOTA0ODk4 | 2,897 | Add OpenAI's HumanEval dataset | [] | closed | false | null | 1 | 2021-09-11T09:37:47Z | 2021-09-16T15:02:11Z | 2021-09-16T15:02:11Z | null | This PR adds OpenAI's [HumanEval](https://github.com/openai/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unittests to verify solution. This dataset is useful to evaluate code generation models. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2897/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2897/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2897.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2897",
"merged_at": "2021-09-16T15:02:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2897.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2897"
} | true | [
"I just fixed the class name, and added `[More Information Needed]` in empty sections in case people want to complete the dataset card :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/986 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/986/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/986/comments | https://api.github.com/repos/huggingface/datasets/issues/986/events | https://github.com/huggingface/datasets/pull/986 | 755,047,470 | MDExOlB1bGxSZXF1ZXN0NTMwODM0MzYx | 986 | Add SciTLDR Dataset | [] | closed | false | null | 5 | 2020-12-02T08:11:16Z | 2020-12-02T18:37:22Z | 2020-12-02T18:02:59Z | null | Adds the SciTLDR Dataset by AI2
Added README card with tags to the best of my knowledge
Multi-target summaries or TLDRs of Scientific Documents | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/986/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/986/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/986.diff",
"html_url": "https://github.com/huggingface/datasets/pull/986",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/986.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/986"
} | true | [
"CI failures seem to be unrelated (related to `norwegian_ner`)\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::t... |
https://api.github.com/repos/huggingface/datasets/issues/3261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3261/comments | https://api.github.com/repos/huggingface/datasets/issues/3261/events | https://github.com/huggingface/datasets/issues/3261 | 1,052,346,381 | I_kwDODunzps4-uYgN | 3,261 | Scifi_TV_Shows: Having trouble getting viewer to find appropriate files | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 2 | 2021-11-12T19:25:19Z | 2021-12-21T10:24:10Z | 2021-12-21T10:24:10Z | null | ## Dataset viewer issue for '*Science Fiction TV Show Plots Corpus (Scifi_TV_Shows)*'
**Link:** [link](https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows)
I tried adding both a script (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/blob/main/Scifi_TV_Shows.py) and some dummy examples (https://huggingface.co/datasets/lara-martin/Scifi_TV_Shows/tree/main/dummy), but the viewer still has a 404 error ("Not found. Maybe the cache is missing, or maybe the ressource does not exist."). I'm not sure what to try next. Thanks in advance!
Am I the one who added this dataset? Yes
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3261/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3261/timeline | null | completed | null | null | false | [
"Hi ! I think this is because `iter_archive` doesn't support ZIP files yet. See https://github.com/huggingface/datasets/issues/3272\r\n\r\nYou can navigate into the archive this way instead:\r\n```python\r\n# in split_generators\r\ndata_dir = dl_manager.download_and_extract(url)\r\ntrain_filepath = os.path.join(dat... |
https://api.github.com/repos/huggingface/datasets/issues/2921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2921/comments | https://api.github.com/repos/huggingface/datasets/issues/2921/events | https://github.com/huggingface/datasets/issues/2921 | 997,325,424 | I_kwDODunzps47cfpw | 2,921 | Using a list of multi-dim numpy arrays raises an error "can only convert 1-dimensional array values" | [] | closed | false | null | 0 | 2021-09-15T17:12:11Z | 2021-09-15T17:21:45Z | 2021-09-15T17:21:45Z | null | This error has been introduced in https://github.com/huggingface/datasets/pull/2361
To reproduce:
```python
import numpy as np
from datasets import Dataset
d = Dataset.from_dict({"a": [np.zeros((2, 2))]})
```
raises
```python
Traceback (most recent call last):
File "playground/ttest.py", line 5, in <module>
d = Dataset.from_dict({"a": [np.zeros((2, 2))]}).with_format("torch")
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_dataset.py", line 458, in from_dict
pa_table = InMemoryTable.from_pydict(mapping=mapping)
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/table.py", line 365, in from_pydict
return cls(pa.Table.from_pydict(*args, **kwargs))
File "pyarrow/table.pxi", line 1639, in pyarrow.lib.Table.from_pydict
File "pyarrow/array.pxi", line 332, in pyarrow.lib.asarray
File "pyarrow/array.pxi", line 223, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/arrow_writer.py", line 107, in __arrow_array__
out = pa.array(self.data, type=type)
File "pyarrow/array.pxi", line 306, in pyarrow.lib.array
File "pyarrow/array.pxi", line 39, in pyarrow.lib._sequence_to_array
File "pyarrow/error.pxi", line 143, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Can only convert 1-dimensional array values | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2921/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2921/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4830/comments | https://api.github.com/repos/huggingface/datasets/issues/4830/events | https://github.com/huggingface/datasets/pull/4830 | 1,336,177,937 | PR_kwDODunzps49Cdro | 4,830 | Fix task tags in dataset cards | [] | closed | false | null | 2 | 2022-08-11T16:06:06Z | 2022-08-11T16:37:27Z | 2022-08-11T16:23:00Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4830/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4830/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4830.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4830",
"merged_at": "2022-08-11T16:23:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4830.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4830"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] |
https://api.github.com/repos/huggingface/datasets/issues/2857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2857/comments | https://api.github.com/repos/huggingface/datasets/issues/2857/events | https://github.com/huggingface/datasets/pull/2857 | 984,093,938 | MDExOlB1bGxSZXF1ZXN0NzIzNTY5OTE4 | 2,857 | Update: Openwebtext - update size | [] | closed | false | null | 1 | 2021-08-31T17:11:03Z | 2022-02-15T10:38:03Z | 2021-09-07T09:44:32Z | null | Update the size of the Openwebtext dataset
I also regenerated the dataset_infos.json but the data file checksum didn't change, and the number of examples either (8013769 examples)
Close #2839, close #726. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2857/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2857/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2857.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2857",
"merged_at": "2021-09-07T09:44:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2857.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2857"
} | true | [
"merging since the CI error in unrelated to this PR and fixed on master"
] |
https://api.github.com/repos/huggingface/datasets/issues/4628 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4628/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4628/comments | https://api.github.com/repos/huggingface/datasets/issues/4628/events | https://github.com/huggingface/datasets/pull/4628 | 1,293,361,308 | PR_kwDODunzps46zvFJ | 4,628 | Fix time type `_arrow_to_datasets_dtype` conversion | [] | closed | false | null | 1 | 2022-07-04T16:20:15Z | 2022-07-07T14:08:38Z | 2022-07-07T13:57:12Z | null | Fix #4620
The issue stems from the fact that `pa.array([time_data]).type` returns `DataType(time64[unit])`, which doesn't expose the `unit` attribute, instead of `Time64Type(time64[unit])`. I believe this is a bug in PyArrow. Luckily, the both types have the same `str()`, so in this PR I call `pa.type_for_alias(str(type))` to convert them both to the `Time64Type(time64[unit])` format.
cc @severo | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4628/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4628/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4628.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4628",
"merged_at": "2022-07-07T13:57:11Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4628.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4628"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4369/comments | https://api.github.com/repos/huggingface/datasets/issues/4369/events | https://github.com/huggingface/datasets/pull/4369 | 1,240,245,642 | PR_kwDODunzps44CpCe | 4,369 | Add redirect to dataset script in the repo structure page | [] | closed | false | null | 1 | 2022-05-18T17:05:33Z | 2022-05-19T08:19:01Z | 2022-05-19T08:10:51Z | null | Following https://github.com/huggingface/hub-docs/pull/146 I added a redirection to the dataset scripts documentation in the repository structure page. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4369/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4369/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4369.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4369",
"merged_at": "2022-05-19T08:10:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4369.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4369"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/4588 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4588/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4588/comments | https://api.github.com/repos/huggingface/datasets/issues/4588/events | https://github.com/huggingface/datasets/pull/4588 | 1,287,368,751 | PR_kwDODunzps46f2kF | 4,588 | Host head_qa data on the Hub and fix NonMatchingChecksumError | [] | closed | false | null | 3 | 2022-06-28T13:39:28Z | 2022-07-05T16:01:15Z | 2022-07-05T15:49:52Z | null | This PR:
- Hosts head_qa data on the Hub instead of Google Drive
- Fixes NonMatchingChecksumError
Fix https://huggingface.co/datasets/head_qa/discussions/1 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4588/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4588/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4588.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4588",
"merged_at": "2022-07-05T15:49:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4588.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4588"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi @albertvillanova ! Thanks for the fix ;)\r\nCan I safely checkout from this branch to build `datasets` or it is preferable to wait until all CI tests pass?\r\nThanks 🙏 ",
"@younesbelkada we have just merged this PR."
] |
https://api.github.com/repos/huggingface/datasets/issues/2038 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2038/comments | https://api.github.com/repos/huggingface/datasets/issues/2038/events | https://github.com/huggingface/datasets/issues/2038 | 830,036,875 | MDU6SXNzdWU4MzAwMzY4NzU= | 2,038 | outdated dataset_infos.json might fail verifications | [] | closed | false | null | 2 | 2021-03-12T11:41:54Z | 2021-03-16T16:27:40Z | 2021-03-16T16:27:40Z | null | The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail data_loader when verifying download checksum etc..
Could you please update this file or point me how to update this file?
Thank you. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2038/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```",
"Fixed by #2041, thanks again @songfeng !"
] |
https://api.github.com/repos/huggingface/datasets/issues/727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/727/comments | https://api.github.com/repos/huggingface/datasets/issues/727/events | https://github.com/huggingface/datasets/issues/727 | 719,386,366 | MDU6SXNzdWU3MTkzODYzNjY= | 727 | Parallel downloads progress bar flickers | [] | open | false | null | 0 | 2020-10-12T13:36:05Z | 2020-10-12T13:36:05Z | null | null | When there are parallel downloads using the download manager, the tqdm progress bar flickers since all the progress bars are on the same line.
To fix that we could simply specify `position=i` for i=0 to n the number of files to download when instantiating the tqdm progress bar.
Another way would be to have one "master" progress bar that tracks the number of finished downloads, and then one progress bar per process that show the current downloads. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/727/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/727/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5589 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5589/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5589/comments | https://api.github.com/repos/huggingface/datasets/issues/5589/events | https://github.com/huggingface/datasets/pull/5589 | 1,603,535,704 | PR_kwDODunzps5K9K1i | 5,589 | Revert "pass the dataset features to the IterableDataset.from_generator" | [] | closed | false | null | 5 | 2023-02-28T17:52:04Z | 2023-03-21T14:21:45Z | 2023-03-21T14:18:18Z | null | This reverts commit b91070b9c09673e2e148eec458036ab6a62ac042 (temporarily)
It hurts iterable dataset performance a lot (e.g. x4 slower because it encodes+decodes images unnecessarily). I think we need to fix this before re-adding it
cc @mariosasko @Hubert-Bonisseur | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5589/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5589/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5589.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5589",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5589.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5589"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/1663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1663/comments | https://api.github.com/repos/huggingface/datasets/issues/1663/events | https://github.com/huggingface/datasets/pull/1663 | 775,914,320 | MDExOlB1bGxSZXF1ZXN0NTQ2NTAzMjg5 | 1,663 | update saving and loading methods for faiss index so to accept path l… | [] | closed | false | null | 1 | 2020-12-29T14:15:37Z | 2021-01-18T09:27:23Z | 2021-01-18T09:27:23Z | null | - Update saving and loading methods for faiss index so to accept path like objects from pathlib
The current code only supports using a string type to save and load a faiss index. This change makes it possible to use a string type OR a Path from [pathlib](https://docs.python.org/3/library/pathlib.html). The codes becomes a more intuitive this way I think. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1663/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1663/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1663.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1663",
"merged_at": "2021-01-18T09:27:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1663.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1663"
} | true | [
"Seems ok for me, what do you think @lhoestq ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/2809 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2809/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2809/comments | https://api.github.com/repos/huggingface/datasets/issues/2809/events | https://github.com/huggingface/datasets/pull/2809 | 971,902,613 | MDExOlB1bGxSZXF1ZXN0NzEzNTc2Njcz | 2,809 | Add Beans Dataset | [] | closed | false | null | 0 | 2021-08-16T16:22:33Z | 2021-08-26T11:42:27Z | 2021-08-26T11:42:27Z | null | Adds the [beans](https://github.com/AI-Lab-Makerere/ibean/) image classification dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2809/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2809/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2809.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2809",
"merged_at": "2021-08-26T11:42:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2809.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2809"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4993 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4993/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4993/comments | https://api.github.com/repos/huggingface/datasets/issues/4993/events | https://github.com/huggingface/datasets/pull/4993 | 1,379,044,435 | PR_kwDODunzps4_QYas | 4,993 | fix: avoid casting tuples after Dataset.map | [] | closed | false | null | 1 | 2022-09-20T08:45:16Z | 2022-09-20T16:11:27Z | 2022-09-20T13:08:29Z | null | This PR updates features.py to avoid casting tuples to lists when reading the results of Dataset.map as suggested by @lhoestq [here](https://github.com/huggingface/datasets/issues/4676#issuecomment-1187371367) in https://github.com/huggingface/datasets/issues/4676.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4993/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4993/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4993.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4993",
"merged_at": "2022-09-20T13:08:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4993.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4993"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/987 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/987/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/987/comments | https://api.github.com/repos/huggingface/datasets/issues/987/events | https://github.com/huggingface/datasets/pull/987 | 755,059,469 | MDExOlB1bGxSZXF1ZXN0NTMwODQ0MTQ4 | 987 | Add OPUS DOGC dataset | [] | closed | false | null | 1 | 2020-12-02T08:30:32Z | 2020-12-04T13:27:41Z | 2020-12-04T13:27:41Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/987/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/987/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/987.diff",
"html_url": "https://github.com/huggingface/datasets/pull/987",
"merged_at": "2020-12-04T13:27:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/987.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/987"
} | true | [
"merging since the CI is fixed on master"
] | |
https://api.github.com/repos/huggingface/datasets/issues/4580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4580/comments | https://api.github.com/repos/huggingface/datasets/issues/4580/events | https://github.com/huggingface/datasets/issues/4580 | 1,286,312,912 | I_kwDODunzps5Mq5PQ | 4,580 | Dataset Viewer issue for multi_news | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 2 | 2022-06-27T20:25:25Z | 2022-06-28T14:08:48Z | 2022-06-28T14:08:48Z | null | ### Link
https://huggingface.co/datasets/multi_news
### Description
Not sure what the index error is referring to here:
```
Status code: 400
Exception: IndexError
Message: list index out of range
```
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4580/timeline | null | completed | null | null | false | [
"Thanks for reporting, @lewtun.\r\n\r\nI forced the refreshing of the preview and it worked OK for train and validation splits.\r\n\r\nI guess the error has to do with the data files being hosted at Google Drive: this gives errors when requested automatically using scripts.\r\nWe should host them to fix the error. ... |
https://api.github.com/repos/huggingface/datasets/issues/1988 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1988/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1988/comments | https://api.github.com/repos/huggingface/datasets/issues/1988/events | https://github.com/huggingface/datasets/issues/1988 | 822,324,605 | MDU6SXNzdWU4MjIzMjQ2MDU= | 1,988 | Readme.md is misleading about kinds of datasets? | [] | closed | false | null | 1 | 2021-03-04T17:04:20Z | 2021-08-04T18:05:23Z | 2021-08-04T18:05:23Z | null | Hi!
At the README.MD, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text. "
But here:
https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117
You mention other kinds of datasets, with images and so on. I'm confused.
Is it possible to use it to store, say, imagenet locally? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1988/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1988/timeline | null | completed | null | null | false | [
"Hi ! Yes it's possible to use image data. There are already a few of them available (MNIST, CIFAR..)"
] |
https://api.github.com/repos/huggingface/datasets/issues/1648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1648/comments | https://api.github.com/repos/huggingface/datasets/issues/1648/events | https://github.com/huggingface/datasets/pull/1648 | 775,542,360 | MDExOlB1bGxSZXF1ZXN0NTQ2MjAxNTQ0 | 1,648 | Update README.md | [] | closed | false | null | 0 | 2020-12-28T18:59:06Z | 2020-12-29T10:39:14Z | 2020-12-29T10:39:14Z | null | added dataset summary | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1648/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1648/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1648.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1648",
"merged_at": "2020-12-29T10:39:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1648.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1648"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3994 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3994/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3994/comments | https://api.github.com/repos/huggingface/datasets/issues/3994/events | https://github.com/huggingface/datasets/pull/3994 | 1,178,211,138 | PR_kwDODunzps404wWu | 3,994 | Change audio column from string path to Audio feature in ASR task | [] | closed | false | null | 0 | 2022-03-23T14:34:52Z | 2022-03-23T15:43:43Z | 2022-03-23T15:43:43Z | null | Will fix #3990 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3994/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3994/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/3994.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3994",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3994.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3994"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3111/comments | https://api.github.com/repos/huggingface/datasets/issues/3111/events | https://github.com/huggingface/datasets/issues/3111 | 1,030,598,983 | I_kwDODunzps49bbFH | 3,111 | concatenate_datasets removes ClassLabel typing. | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-10-19T18:05:31Z | 2021-10-21T14:50:21Z | 2021-10-21T14:50:21Z | null | ## Describe the bug
When concatenating two datasets, we lose typing of ClassLabel columns.
I can work on this if this is a legitimate bug,
## Steps to reproduce the bug
```python
import datasets
from datasets import Dataset, ClassLabel, Value, concatenate_datasets
DS_LEN = 100
my_dataset = Dataset.from_dict(
{
"sentence": [f"{chr(i % 10)}" for i in range(DS_LEN)],
"label": [i % 2 for i in range(DS_LEN)]
}
)
my_predictions = Dataset.from_dict(
{
"pred": [(i + 1) % 2 for i in range(DS_LEN)]
}
)
my_dataset = my_dataset.cast(datasets.Features({"sentence": Value("string"), "label": ClassLabel(2, names=["POS", "NEG"])}))
print("Original")
print(my_dataset)
print(my_dataset.features)
concat_ds = concatenate_datasets([my_dataset, my_predictions], axis=1)
print("Concatenated")
print(concat_ds)
print(concat_ds.features)
```
## Expected results
The features of `concat_ds` should contain ClassLabel.
## Actual results
On master, I get:
```
{'sentence': Value(dtype='string', id=None), 'label': Value(dtype='int64', id=None), 'pred': Value(dtype='int64', id=None)}
```
## Environment info
- `datasets` version: 1.14.1.dev0
- Platform: macOS-10.15.7-x86_64-i386-64bit
- Python version: 3.8.11
- PyArrow version: 4.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3111/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3111/timeline | null | completed | null | null | false | [
"Something like this would fix it I think: https://github.com/huggingface/datasets/compare/master...Dref360:HF-3111/concatenate_types?expand=1"
] |
https://api.github.com/repos/huggingface/datasets/issues/2230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2230/comments | https://api.github.com/repos/huggingface/datasets/issues/2230/events | https://github.com/huggingface/datasets/issues/2230 | 859,817,159 | MDU6SXNzdWU4NTk4MTcxNTk= | 2,230 | Keys yielded while generating dataset are not being checked | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 9 | 2021-04-16T13:29:47Z | 2021-05-10T17:31:21Z | 2021-05-10T17:31:21Z | null | The keys used in the dataset generation script to ensure the same order is generated on every user's end should be checked for their types (i.e either `str` or `int`) as well as whether they are unique or not.
Currently, the keys are not being checked for any of these, as evident from `xnli' dataset generation:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/datasets/xnli/xnli.py#L196
Even after having a tuple as key, the dataset is generated without any warning.
Also, as tested in the case of `anli` dataset (I tweeked the dataset script to use `1` as a key for every example):
```
>>> import datasets
>>> nik = datasets.load_dataset('anli')
Downloading and preparing dataset anli/plain_text (download: 17.76 MiB, generated: 73.55 MiB, post-processed: Unknown size, total: 91.31 MiB) to C:\Users\nikhil\.cache\huggingface\datasets\anli\plain_text\0.1.0\43fa2c99c10bf8478f1fa0860f7b122c6b277c4c41306255b7641257cf4e3299...
0 examples [00:00, ? examples/s]1 {'uid': '0fd0abfb-659e-4453-b196-c3a64d2d8267', 'premise': 'The Parma trolleybus system (Italian: "Rete filoviaria di Parma" ) forms part of the public transport network of the city and "comune" of Parma, in the region of Emilia-Romagna, northern Italy. In operation since 1953, the system presently comprises four urban routes.', 'hypothesis': 'The trolleybus system has over 2 urban routes', 'label': 'entailment', 'reason': ''}
2021-04-16 12:38:14.483968: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library cudart64_110.dll
1 examples [00:01, 1.87s/ examples]1 {'uid': '7ed72ff4-40b7-4f8a-b1b9-6c612aa62c84', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Sharron Macready was a popular character through the 1980's.", 'label': 'neutral', 'reason': ''}
1 {'uid': '5d2930a3-62ac-485d-94d7-4e36cbbcd7b5', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': "Bastedo didn't keep any pets because of her views on animal rights.", 'label': 'neutral', 'reason': ''}
1 {'uid': '324db753-ddc9-4a85-a825-f09e2e5aebdd', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Alexandra Bastedo was named by her mother.', 'label': 'neutral', 'reason': ''}
1 {'uid': '4874f429-da0e-406a-90c7-22240ff3ddf8', 'premise': 'Alexandra Lendon Bastedo (9 March 1946 – 12 January 2014) was a British actress, best known for her role as secret agent Sharron Macready in the 1968 British espionage/science fiction adventure series "The Champions". She has been cited as a sex symbol of the 1960s and 1970s. Bastedo was a vegetarian and animal welfare advocate.', 'hypothesis': 'Bastedo cared for all the animals that inhabit the earth.', 'label': 'neutral', 'reason': ''}
```
Here also, the dataset was generated successfuly even hough it had same keys without any warning.
The reason appears to stem from here:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L988
Here, although it has access to every key, but it is not being checked and the example is written directly:
https://github.com/huggingface/datasets/blob/56346791aed417306d054d89bd693d6b7eab17f7/src/datasets/builder.py#L992
I would like to take this issue if you allow me. Thank You! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2230/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2230/timeline | null | completed | null | null | false | [
"Hi ! Indeed there's no verification on the uniqueness nor the types of the keys.\r\nDo you already have some ideas of what you would like to implement and how ?",
"Hey @lhoestq, thank you so much for the opportunity.\r\nAlthough I haven't had much experience with the HF Datasets code, after a careful look at how... |
https://api.github.com/repos/huggingface/datasets/issues/4282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4282/comments | https://api.github.com/repos/huggingface/datasets/issues/4282/events | https://github.com/huggingface/datasets/pull/4282 | 1,225,616,545 | PR_kwDODunzps43TZYL | 4,282 | Don't do unnecessary list type casting to avoid replacing None values by empty lists | [] | closed | false | null | 3 | 2022-05-04T16:37:01Z | 2022-05-06T10:43:58Z | 2022-05-06T10:37:00Z | null | In certain cases, `None` values are replaced by empty lists when casting feature types.
It happens every time you cast an array of nested lists like [None, [0, 1, 2, 3]] to a different type (to change the integer precision for example). In this case you'd get [[], [0, 1, 2, 3]] for example. This issue comes from PyArrow, see the discussion in https://github.com/huggingface/datasets/issues/3676
This issue also happens when no type casting is needed, because casting is supposed to be a no-op in this case. But as https://github.com/huggingface/datasets/issues/3676 shown, it's not the case and `None` are replaced by empty lists even if we cast to the exact same type.
In this PR I just workaround this bug in the case where no type casting is needed. In particular, I only call `pa.ListArray.from_arrays` only when necessary.
I also added a warning when some `None` are effectively replaced by empty lists. I wanted to raise an error in this case, but maybe we should wait a major update to do so
This PR fixes this particular case, that is occurring in `run_qa.py` in `transformers`:
```python
from datasets import Dataset
ds = Dataset.from_dict({"a": range(4)})
ds = ds.map(lambda x: {"b": [[None, [0]]]}, batched=True, batch_size=1, remove_columns=["a"])
print(ds.to_pandas())
# before:
# b
# 0 [None, [0]]
# 1 [[], [0]]
# 2 [[], [0]]
# 3 [[], [0]]
#
# now:
# b
# 0 [None, [0]]
# 1 [None, [0]]
# 2 [None, [0]]
# 3 [None, [0]]
```
cc @sgugger | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4282/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4282.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4282",
"merged_at": "2022-05-06T10:37:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4282.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4282"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Quick question about the message in the warning. You say \"will be fixed in a future major version\" but don't you mean \"will raise an error in a future major version\"?",
"Right ! Good catch, thanks, I updated the message to say ... |
https://api.github.com/repos/huggingface/datasets/issues/2612 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2612/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2612/comments | https://api.github.com/repos/huggingface/datasets/issues/2612/events | https://github.com/huggingface/datasets/pull/2612 | 940,604,512 | MDExOlB1bGxSZXF1ZXN0Njg2NjUwMjk3 | 2,612 | Return Python float instead of numpy.float64 in sklearn metrics | [] | closed | false | {
"closed_at": "2021-07-21T15:36:49Z",
"closed_issues": 29,
"created_at": "2021-06-08T18:48:33Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"gravatar_id": "",
"html_url": "https://github.com/albertvillanova",
"id": 8515462,
"login": "albertvillanova",
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"site_admin": false,
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"type": "User",
"url": "https://api.github.com/users/albertvillanova"
},
"description": "Next minor release",
"due_on": "2021-08-05T07:00:00Z",
"html_url": "https://github.com/huggingface/datasets/milestone/6",
"id": 6836458,
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/6/labels",
"node_id": "MDk6TWlsZXN0b25lNjgzNjQ1OA==",
"number": 6,
"open_issues": 0,
"state": "closed",
"title": "1.10",
"updated_at": "2021-07-21T15:36:49Z",
"url": "https://api.github.com/repos/huggingface/datasets/milestones/6"
} | 3 | 2021-07-09T09:48:09Z | 2021-07-12T14:12:53Z | 2021-07-09T13:03:54Z | null | This PR converts the return type of all `sklearn` metrics to be Python `float` instead of `numpy.float64`.
The reason behind this is that our Hub evaluation framework relies on converting benchmark-specific metrics to YAML ([example](https://huggingface.co/datasets/autonlp/autonlp-benchmark-raft-neelalex__raft-test-neelalex__raft-predictions-3/blob/main/README.md#L11)) and the `numpy.float64` format produces garbage like:
```python
import yaml
from datasets import load_metric
metric = load_metric("accuracy")
score = metric.compute(predictions=[0,1], references=[0,1])
print(yaml.dump(score["accuracy"])) # output below
# !!python/object/apply:numpy.core.multiarray.scalar
# - !!python/object/apply:numpy.dtype
# args:
# - f8
# - false
# - true
# state: !!python/tuple
# - 3
# - <
# - null
# - null
# - null
# - -1
# - -1
# - 0
# - !!binary |
# AAAAAAAA8D8=
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2612/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2612/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2612.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2612",
"merged_at": "2021-07-09T13:03:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2612.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2612"
} | true | [
"I opened an issue on the `sklearn` repo to understand why `numpy.float64` is the default: https://github.com/scikit-learn/scikit-learn/discussions/20490",
"It could be surprising at first to use `tolist()` on numpy scalars but it works ^^",
"did the same for Pearsonr here: https://github.com/huggingface/datase... |
https://api.github.com/repos/huggingface/datasets/issues/6010 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6010/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6010/comments | https://api.github.com/repos/huggingface/datasets/issues/6010/events | https://github.com/huggingface/datasets/issues/6010 | 1,793,838,152 | I_kwDODunzps5q68xI | 6,010 | Improve `Dataset`'s string representation | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 2 | 2023-07-07T16:38:03Z | 2023-07-16T13:00:18Z | null | null | Currently, `Dataset.__repr__` outputs a dataset's column names and the number of rows. We could improve it by printing its features and the first few rows.
We should also implement `_repr_html_` to have a rich HTML representation in notebooks/Streamlit. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6010/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6010/timeline | null | null | null | null | false | [
"I want to take a shot at this if possible ",
"Yes, feel free to work on this.\r\n\r\nYou can check the PyArrow Table `__repr__` and Polars DataFrame `__repr__`/`_repr_html_` implementations for some pointers/ideas."
] |
https://api.github.com/repos/huggingface/datasets/issues/2029 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2029/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2029/comments | https://api.github.com/repos/huggingface/datasets/issues/2029/events | https://github.com/huggingface/datasets/issues/2029 | 829,097,290 | MDU6SXNzdWU4MjkwOTcyOTA= | 2,029 | Loading a faiss index KeyError | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 4 | 2021-03-11T12:16:13Z | 2021-03-12T00:21:09Z | 2021-03-12T00:21:09Z | null | I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a file
5. Create a new dataset (dataset2) with the same text and label information as dataset1
6. Try to load the faiss index from file to dataset2
7. Get `KeyError: "Column embeddings not in the dataset"`
I've made a colab notebook that should show exactly what I did. Please switch to GPU runtime; I didn't check on CPU.
https://colab.research.google.com/drive/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing
Ubuntu Version
VERSION="18.04.5 LTS (Bionic Beaver)"
datasets==1.4.1
faiss==1.5.3
faiss-gpu==1.7.0
torch==1.8.0+cu101
transformers==4.3.3
NVIDIA-SMI 460.56
Driver Version: 460.32.03
CUDA Version: 11.2
Tesla K80
I was basically following the steps here: https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index
I included the exact code from the documentation at the end of the notebook to show that they don't work either.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2029/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2029/timeline | null | completed | null | null | false | [
"In your code `dataset2` doesn't contain the \"embeddings\" column, since it is created from the pandas DataFrame with columns \"text\" and \"label\".\r\n\r\nTherefore when you call `dataset2[embeddings_name]`, you get a `KeyError`.\r\n\r\nIf you want the \"embeddings\" column back, you can create `dataset2` with\r... |
https://api.github.com/repos/huggingface/datasets/issues/4419 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4419/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4419/comments | https://api.github.com/repos/huggingface/datasets/issues/4419/events | https://github.com/huggingface/datasets/issues/4419 | 1,252,652,896 | I_kwDODunzps5Kqfdg | 4,419 | Update `unittest` assertions over tuples from `assertEqual` to `assertTupleEqual` | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 3 | 2022-05-30T12:13:18Z | 2022-09-30T16:01:37Z | 2022-09-30T16:01:37Z | null | **Is your feature request related to a problem? Please describe.**
So this is more a readability improvement rather than a proposal, wouldn't it be better to use `assertTupleEqual` over the tuples rather than `assertEqual`? As `unittest` added that function in `v3.1`, as detailed at https://docs.python.org/3/library/unittest.html#unittest.TestCase.assertTupleEqual, so maybe it's worth updating.
Find an example of an `assertEqual` over a tuple in 🤗 `datasets` unit tests over an `ArrowDataset` at https://github.com/huggingface/datasets/blob/0bb47271910c8a0b628dba157988372307fca1d2/tests/test_arrow_dataset.py#L570
**Describe the solution you'd like**
Start slowly replacing all the `assertEqual` statements with `assertTupleEqual` if the assertion is done over a Python tuple, as we're doing with the Python lists using `assertListEqual` rather than `assertEqual`.
**Additional context**
If so, please let me know and I'll try to go over the tests and create a PR if applicable, otherwise, if you consider this should stay as `assertEqual` rather than `assertSequenceEqual` feel free to close this issue! Thanks 🤗
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4419/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4419/timeline | null | completed | null | null | false | [
"Hi! If the only goal is to improve readability, it's better to use `assertTupleEqual` than `assertSequenceEqual` for Python tuples. Also, note that this function is called internally by `assertEqual`, but I guess we can accept a PR to be more verbose.",
"Hi @mariosasko, right! I'll update the issue title/desc wi... |
https://api.github.com/repos/huggingface/datasets/issues/2892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2892/comments | https://api.github.com/repos/huggingface/datasets/issues/2892/events | https://github.com/huggingface/datasets/issues/2892 | 993,274,572 | MDU6SXNzdWU5OTMyNzQ1NzI= | 2,892 | Error when encoding a dataset with None objects with a Sequence feature | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-09-10T14:11:43Z | 2021-09-13T14:18:13Z | 2021-09-13T14:17:42Z | null | There is an error when encoding a dataset with None objects with a Sequence feature
To reproduce:
```python
from datasets import Dataset, Features, Value, Sequence
data = {"a": [[0], None]}
features = Features({"a": Sequence(Value("int32"))})
dataset = Dataset.from_dict(data, features=features)
```
raises
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-24-40add67f8751> in <module>
2 data = {"a": [[0], None]}
3 features = Features({"a": Sequence(Value("int32"))})
----> 4 dataset = Dataset.from_dict(data, features=features)
[...]
~/datasets/features.py in encode_nested_example(schema, obj)
888 if isinstance(obj, str): # don't interpret a string as a list
889 raise ValueError("Got a string but expected a list instead: '{}'".format(obj))
--> 890 return [encode_nested_example(schema.feature, o) for o in obj]
891 # Object with special encoding:
892 # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
TypeError: 'NoneType' object is not iterable
```
Instead, if should run without error, as if the `features` were not passed | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2892/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2892/timeline | null | completed | null | null | false | [
"This has been fixed by https://github.com/huggingface/datasets/pull/2900\r\nWe're doing a new release 1.12 today to make the fix available :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2704 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2704/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2704/comments | https://api.github.com/repos/huggingface/datasets/issues/2704/events | https://github.com/huggingface/datasets/pull/2704 | 950,483,980 | MDExOlB1bGxSZXF1ZXN0Njk1MDIzMTEz | 2,704 | Fix pick default config name message | [] | closed | false | null | 0 | 2021-07-22T09:49:43Z | 2021-07-22T10:02:41Z | 2021-07-22T10:02:40Z | null | The error message to tell which config name to load is not displayed.
This is because in the code it was considering the config kwargs to be non-empty, which is a special case for custom configs created on the fly. It appears after this change: https://github.com/huggingface/datasets/pull/2659
I fixed that by making the config kwargs empty by default, even if default parameters are passed
Fix https://github.com/huggingface/datasets/issues/2703 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2704/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2704/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2704.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2704",
"merged_at": "2021-07-22T10:02:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2704.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2704"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3301/comments | https://api.github.com/repos/huggingface/datasets/issues/3301/events | https://github.com/huggingface/datasets/pull/3301 | 1,058,718,957 | PR_kwDODunzps4uyA9o | 3,301 | Add wikipedia tags | [] | closed | false | null | 0 | 2021-11-19T16:39:25Z | 2021-11-19T16:49:30Z | 2021-11-19T16:49:29Z | null | Add the missing tags to the wikipedia dataset card.
I also added the missing languages code in our language codes list.
This should also fix the code snippet that is presented on the Hub to load the dataset: fix https://github.com/huggingface/datasets/issues/3292 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3301/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3301/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3301.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3301",
"merged_at": "2021-11-19T16:49:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3301.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3301"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1462 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1462/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1462/comments | https://api.github.com/repos/huggingface/datasets/issues/1462/events | https://github.com/huggingface/datasets/pull/1462 | 761,489,274 | MDExOlB1bGxSZXF1ZXN0NTM2MTQ4Njc1 | 1,462 | Added conv ai 2 (Again) | [] | closed | false | null | 6 | 2020-12-10T18:21:55Z | 2020-12-13T00:21:32Z | 2020-12-13T00:21:31Z | null | The original PR -> https://github.com/huggingface/datasets/pull/1383
Reason for creating again -
The reason I had to create the PR again was due to the master rebasing issue. After rebasing the changes, all the previous commits got added to the branch. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1462/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1462/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1462.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1462",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1462.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1462"
} | true | [
"Looking perfect to me, need to rerun the tests\r\n",
"Thanks, @tanmoyio. \r\nHow do I rerun the tests? Should I change something or push a new commit?",
"@rkc007 you don't need to rerun it, @lhoestq @yjernite will rerun it, as there are huge number of PRs in the queue it might take lil bit of time. ",
"ive j... |
https://api.github.com/repos/huggingface/datasets/issues/2422 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2422/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2422/comments | https://api.github.com/repos/huggingface/datasets/issues/2422/events | https://github.com/huggingface/datasets/pull/2422 | 905,568,548 | MDExOlB1bGxSZXF1ZXN0NjU2NjM3MzY1 | 2,422 | Fix save_to_disk nested features order in dataset_info.json | [] | closed | false | null | 0 | 2021-05-28T15:03:28Z | 2021-05-28T15:26:57Z | 2021-05-28T15:26:56Z | null | Fix issue https://github.com/huggingface/datasets/issues/2267
The order of the nested features matters (pyarrow limitation), but the save_to_disk method was saving the features types as JSON with `sort_keys=True`, which was breaking the order of the nested features. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2422/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2422/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2422.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2422",
"merged_at": "2021-05-28T15:26:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2422.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2422"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4301 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4301/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4301/comments | https://api.github.com/repos/huggingface/datasets/issues/4301/events | https://github.com/huggingface/datasets/pull/4301 | 1,230,401,256 | PR_kwDODunzps43idlE | 4,301 | Add ImageNet-Sketch dataset | [] | closed | false | null | 2 | 2022-05-09T23:38:45Z | 2022-05-23T18:14:14Z | 2022-05-23T18:05:29Z | null | This PR adds the ImageNet-Sketch dataset and resolves #3953 . | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4301/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4301/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4301.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4301",
"merged_at": "2022-05-23T18:05:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4301.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4301"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I think you can go ahead with uploading the data, and also ping the author in parallel. I think the images may subject to copyright anyway (scrapped from google image) so the dataset author is not allowed to set a license to the data... |
https://api.github.com/repos/huggingface/datasets/issues/5222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5222/comments | https://api.github.com/repos/huggingface/datasets/issues/5222/events | https://github.com/huggingface/datasets/issues/5222 | 1,442,412,507 | I_kwDODunzps5V-Xfb | 5,222 | HuggingFace website is incorrectly reporting that my datasets are pickled | [] | closed | false | null | 4 | 2022-11-09T16:41:16Z | 2022-11-09T18:10:46Z | 2022-11-09T18:06:57Z | null | ### Describe the bug
HuggingFace is incorrectly reporting that my datasets are pickled. They are not picked, they are simple ZIP files containing PNG images.
Hopefully this is the right location to report this bug.
### Steps to reproduce the bug
Inspect my dataset respository here: https://huggingface.co/datasets/ProGamerGov/StableDiffusion-v1-5-Regularization-Images
### Expected behavior
They should not be reported as being pickled.
### Environment info
N/A | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5222/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5222/timeline | null | completed | null | null | false | [
"cc @McPatate maybe you know what's happening ?",
"Yes I think I know what is happening. We check in zips for pickles, and the UI must display the pickle jar when a scan has an associated list of imports, even when empty.\r\n~I'll fix ASAP !~",
"> I'll fix ASAP !\r\n\r\nActually I'd rather leave it like that f... |
https://api.github.com/repos/huggingface/datasets/issues/6051 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6051/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6051/comments | https://api.github.com/repos/huggingface/datasets/issues/6051/events | https://github.com/huggingface/datasets/issues/6051 | 1,811,549,650 | I_kwDODunzps5r-g3S | 6,051 | Skipping shard in the remote repo and resume upload | [] | closed | false | null | 2 | 2023-07-19T09:25:26Z | 2023-07-20T18:16:01Z | 2023-07-20T18:16:00Z | null | ### Describe the bug
For some reason when I try to resume the upload of my dataset, it is very slow to reach the index of the shard from which to resume the uploading.
From my understanding, the problem is in this part of the code:
arrow_dataset.py
```python
for index, shard in logging.tqdm(
enumerate(itertools.chain([first_shard], shards_iter)),
desc="Pushing dataset shards to the dataset hub",
total=num_shards,
disable=not logging.is_progress_bar_enabled(),
):
shard_path_in_repo = path_in_repo(index, shard)
# Upload a shard only if it doesn't already exist in the repository
if shard_path_in_repo not in data_files:
```
In particular, iterating the generator is slow during the call:
```python
self._select_contiguous(start, length, new_fingerprint=new_fingerprint)
```
I wonder if it is possible to avoid calling this function for shards that are already uploaded and just start from the correct shard index.
### Steps to reproduce the bug
1. Start the upload
```python
dataset = load_dataset("imagefolder", data_dir=DATA_DIR, split="train", drop_labels=True)
dataset.push_to_hub("repo/name")
```
2. Stop and restart the upload after hundreds of shards
### Expected behavior
Skip the uploaded shards faster.
### Environment info
- `datasets` version: 2.5.1
- Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.16
- PyArrow version: 12.0.1
- Pandas version: 2.0.2
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6051/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6051/timeline | null | completed | null | null | false | [
"Hi! `_select_contiguous` fetches a (zero-copy) slice of the dataset's Arrow table to build a shard, so I don't think this part is the problem. To me, the issue seems to be the step where we embed external image files' bytes (a lot of file reads). You can use `.map` with multiprocessing to perform this step before ... |
https://api.github.com/repos/huggingface/datasets/issues/3128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3128/comments | https://api.github.com/repos/huggingface/datasets/issues/3128/events | https://github.com/huggingface/datasets/issues/3128 | 1,032,201,870 | I_kwDODunzps49hiaO | 3,128 | Support Audio feature for TAR archives in sequential access | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2021-10-21T08:23:01Z | 2021-11-17T17:42:07Z | 2021-11-17T17:42:07Z | null | Currently, Audio feature accesses each audio file by their file path.
However, streamed TAR archive files do not allow random access to their archived files.
Therefore, we should enhance the Audio feature to support TAR archived files in sequential access. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3128/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3128/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5790/comments | https://api.github.com/repos/huggingface/datasets/issues/5790/events | https://github.com/huggingface/datasets/pull/5790 | 1,683,229,126 | PR_kwDODunzps5PG0mJ | 5,790 | Allow to run CI on push to ci-branch | [] | closed | false | null | 2 | 2023-04-25T13:57:26Z | 2023-04-26T13:43:08Z | 2023-04-26T13:35:47Z | null | This PR allows to run the CI on push to a branch named "ci-*", without needing to open a PR.
- This will allow to make CI tests without opening a PR, e.g., for future `huggingface-hub` releases, future dependency releases (like `fsspec`, `pandas`,...)
Note that to build the documentation, we already allow it on push to a branch named "doc-builder*".
See:
- #5788
CC: @Wauplin | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5790/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5790/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5790.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5790",
"merged_at": "2023-04-26T13:35:47Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5790.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5790"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/2440 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2440/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2440/comments | https://api.github.com/repos/huggingface/datasets/issues/2440/events | https://github.com/huggingface/datasets/issues/2440 | 908,521,954 | MDU6SXNzdWU5MDg1MjE5NTQ= | 2,440 | Remove `extended` field from dataset tagger | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2021-06-01T17:18:42Z | 2021-06-09T09:06:31Z | 2021-06-09T09:06:30Z | null | ## Describe the bug
While working on #2435 I used the [dataset tagger](https://huggingface.co/datasets/tagging/) to generate the missing tags for the YAML metadata of each README.md file. However, it seems that our CI raises an error when the `extended` field is included:
```
dataset_name = 'arcd'
@pytest.mark.parametrize("dataset_name", get_changed_datasets(repo_path))
def test_changed_dataset_card(dataset_name):
card_path = repo_path / "datasets" / dataset_name / "README.md"
assert card_path.exists()
error_messages = []
try:
ReadMe.from_readme(card_path)
except Exception as readme_error:
error_messages.append(f"The following issues have been found in the dataset cards:\nREADME:\n{readme_error}")
try:
DatasetMetadata.from_readme(card_path)
except Exception as metadata_error:
error_messages.append(
f"The following issues have been found in the dataset cards:\nYAML tags:\n{metadata_error}"
)
if error_messages:
> raise ValueError("\n".join(error_messages))
E ValueError: The following issues have been found in the dataset cards:
E YAML tags:
E __init__() got an unexpected keyword argument 'extended'
tests/test_dataset_cards.py:70: ValueError
```
Consider either removing this tag from the tagger or including it as part of the validation step in the CI.
cc @yjernite | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2440/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2440/timeline | null | completed | null | null | false | [
"The tagger also doesn't insert the value for the `size_categories` field automatically, so this should be fixed too",
"Thanks for reporting. Indeed the `extended` tag doesn't exist. Not sure why we had that in the tagger.\r\nThe repo of the tagger is here if someone wants to give this a try: https://github.com/h... |
https://api.github.com/repos/huggingface/datasets/issues/2936 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2936/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2936/comments | https://api.github.com/repos/huggingface/datasets/issues/2936/events | https://github.com/huggingface/datasets/pull/2936 | 999,521,647 | PR_kwDODunzps4r5knb | 2,936 | Check that array is not Float as nan != nan | [] | closed | false | null | 0 | 2021-09-17T16:16:41Z | 2021-09-21T09:39:05Z | 2021-09-21T09:39:04Z | null | The Exception wants to check for issues with StructArrays/ListArrays but catches FloatArrays with value nan as nan != nan.
Pass on FloatArrays as we should not raise an Exception for them. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2936/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2936/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2936.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2936",
"merged_at": "2021-09-21T09:39:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2936.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2936"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5938 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5938/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5938/comments | https://api.github.com/repos/huggingface/datasets/issues/5938/events | https://github.com/huggingface/datasets/pull/5938 | 1,749,462,851 | PR_kwDODunzps5SmbkI | 5,938 | Make get_from_cache use custom temp filename that is locked | [] | closed | false | null | 2 | 2023-06-09T09:01:13Z | 2023-06-14T13:35:38Z | 2023-06-14T13:27:24Z | null | This PR ensures that the temporary filename created is the same as the one that is locked, while writing to the cache.
This PR stops using `tempfile` to generate the temporary filename.
Additionally, the behavior now is aligned for both `resume_download` `True` and `False`.
Refactor temp_file_manager so that it uses the filename that is locked:
- Use: `cache_path + ".incomplete"`, when the locked one is `cache_path + ".lock"`
Before it was using `tempfile` inside `cache_dir`, which was not locked: although very improbable name collision (8 random characters), this was not impossible when huge number of multiple processes.
Maybe related to "Stale file handle" issues caused by `tempfile`:
- [ ] https://huggingface.co/datasets/tapaco/discussions/4
- [ ] https://huggingface.co/datasets/xcsr/discussions/1
- [ ] https://huggingface.co/datasets/covost2/discussions/3
```
Error code: ConfigNamesError
Exception: OSError
Message: [Errno 116] Stale file handle
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runners/dataset/config_names.py", line 61, in compute_config_names_response
for config in sorted(get_dataset_config_names(path=dataset, use_auth_token=use_auth_token))
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 323, in get_dataset_config_names
dataset_module = dataset_module_factory(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1219, in dataset_module_factory
raise e1 from None
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 1188, in dataset_module_factory
return HubDatasetModuleFactoryWithScript(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 907, in get_module
dataset_readme_path = self.download_dataset_readme_file()
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py", line 896, in download_dataset_readme_file
return cached_path(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 183, in cached_path
output_path = get_from_cache(
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 611, in get_from_cache
http_get(
File "/usr/local/lib/python3.9/tempfile.py", line 496, in __exit__
result = self.file.__exit__(exc, value, tb)
OSError: [Errno 116] Stale file handle
```
- the stale file handle error can be raised when `tempfile` tries to close (when exiting its context manager) a filename that has been already closed by other process
- note that `tempfile` filenames are randomly generated but not locked in our code
CC: @severo | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5938/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5938/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5938.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5938",
"merged_at": "2023-06-14T13:27:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5938.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5938"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/3494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3494/comments | https://api.github.com/repos/huggingface/datasets/issues/3494/events | https://github.com/huggingface/datasets/pull/3494 | 1,089,983,103 | PR_kwDODunzps4wV0vB | 3,494 | Clone full repo to detect new tags when mirroring datasets on the Hub | [] | closed | false | null | 2 | 2021-12-28T15:50:47Z | 2021-12-28T16:07:21Z | 2021-12-28T16:07:20Z | null | The new releases of `datasets` were not detected because the shallow clone in the CI wasn't getting the git tags.
By cloning the full repository we can properly detect a new release, and tag all the dataset repositories accordingly
cc @SBrandeis | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 1,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3494/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3494/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3494.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3494",
"merged_at": "2021-12-28T16:07:20Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3494.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3494"
} | true | [
"Good catch !!",
"The CI fail is unrelated to this PR and fixed on master, merging :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/2164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2164/comments | https://api.github.com/repos/huggingface/datasets/issues/2164/events | https://github.com/huggingface/datasets/pull/2164 | 849,739,759 | MDExOlB1bGxSZXF1ZXN0NjA4NDQ0MTE3 | 2,164 | Replace assertTrue(isinstance with assertIsInstance in tests | [] | closed | false | null | 0 | 2021-04-03T21:07:02Z | 2021-04-06T14:41:09Z | 2021-04-06T14:41:08Z | null | Replaces all the occurrences of the `assertTrue(isinstance(` pattern with `assertIsInstance`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2164/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2164/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2164.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2164",
"merged_at": "2021-04-06T14:41:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2164.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2164"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/456 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/456/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/456/comments | https://api.github.com/repos/huggingface/datasets/issues/456/events | https://github.com/huggingface/datasets/pull/456 | 668,723,785 | MDExOlB1bGxSZXF1ZXN0NDU5MTc1MTY0 | 456 | add crd3(ACL 2020) dataset | [] | closed | false | null | 0 | 2020-07-30T13:28:35Z | 2020-08-03T11:28:52Z | 2020-08-03T11:28:52Z | null | This PR adds the **Critical Role Dungeons and Dragons Dataset** published at ACL 2020 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/456/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/456/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/456.diff",
"html_url": "https://github.com/huggingface/datasets/pull/456",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/456.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/456"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4119 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4119/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4119/comments | https://api.github.com/repos/huggingface/datasets/issues/4119/events | https://github.com/huggingface/datasets/pull/4119 | 1,195,641,298 | PR_kwDODunzps41yXHF | 4,119 | Hotfix failing CI tests on Windows | [] | closed | false | null | 1 | 2022-04-07T07:38:46Z | 2022-04-07T09:47:24Z | 2022-04-07T07:57:13Z | null | This PR makes a hotfix for our CI Windows tests: https://app.circleci.com/pipelines/github/huggingface/datasets/11092/workflows/9cfdb1dd-0fec-4fe0-8122-5f533192ebdc/jobs/67414
Fix #4118
I guess this issue is related to this PR:
- huggingface/huggingface_hub#815 | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4119/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4119/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4119.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4119",
"merged_at": "2022-04-07T07:57:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4119.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4119"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5159/comments | https://api.github.com/repos/huggingface/datasets/issues/5159/events | https://github.com/huggingface/datasets/pull/5159 | 1,422,172,080 | PR_kwDODunzps5BfBN9 | 5,159 | fsspec lock reset in multiprocessing | [] | closed | false | null | 1 | 2022-10-25T09:41:59Z | 2022-11-03T20:51:15Z | 2022-11-03T20:48:53Z | null | `fsspec` added a clean way of resetting its lock - instead of doing it manually | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5159/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5159/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5159.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5159",
"merged_at": "2022-11-03T20:48:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5159.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5159"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/473/comments | https://api.github.com/repos/huggingface/datasets/issues/473/events | https://github.com/huggingface/datasets/pull/473 | 672,007,247 | MDExOlB1bGxSZXF1ZXN0NDYyMTIwNzU4 | 473 | add DoQA dataset (ACL 2020) | [] | closed | false | null | 0 | 2020-08-03T11:26:52Z | 2020-09-10T17:19:11Z | 2020-09-03T11:44:15Z | null | add DoQA dataset (ACL 2020) http://ixa.eus/node/12931 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/473/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/473/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/473.diff",
"html_url": "https://github.com/huggingface/datasets/pull/473",
"merged_at": "2020-09-03T11:44:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/473.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/473"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/6080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6080/comments | https://api.github.com/repos/huggingface/datasets/issues/6080/events | https://github.com/huggingface/datasets/pull/6080 | 1,822,667,554 | PR_kwDODunzps5WdL4K | 6,080 | Remove README link to deprecated Colab notebook | [] | closed | false | null | 3 | 2023-07-26T15:27:49Z | 2023-07-26T16:24:43Z | 2023-07-26T16:14:34Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6080/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6080/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6080.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6080",
"merged_at": "2023-07-26T16:14:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6080.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6080"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/1424 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1424/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1424/comments | https://api.github.com/repos/huggingface/datasets/issues/1424/events | https://github.com/huggingface/datasets/pull/1424 | 760,724,914 | MDExOlB1bGxSZXF1ZXN0NTM1NTA4MjY5 | 1,424 | Add yoruba wordsim353 | [] | closed | false | null | 0 | 2020-12-09T22:37:42Z | 2020-12-09T22:39:45Z | 2020-12-09T22:39:45Z | null | Added WordSim-353 evaluation dataset for Yoruba | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1424/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1424/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1424.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1424",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1424.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1424"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2228/comments | https://api.github.com/repos/huggingface/datasets/issues/2228/events | https://github.com/huggingface/datasets/pull/2228 | 859,795,563 | MDExOlB1bGxSZXF1ZXN0NjE2ODE2MTQz | 2,228 | [WIP] Add ArrayXD support for fixed size list. | [] | open | false | null | 1 | 2021-04-16T13:04:08Z | 2022-07-06T15:19:48Z | null | null | Add support for fixed size list for ArrayXD when shape is known . See https://github.com/huggingface/datasets/issues/2146
Since offset are not stored anymore, the file size is now roughly equal to the actual data size. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2228/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2228/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2228.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2228",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2228.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2228"
} | true | [
"Awesome thanks ! To fix the CI you just need to merge master into your branch.\r\nThe error is unrelated to your PR"
] |
https://api.github.com/repos/huggingface/datasets/issues/4448 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4448/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4448/comments | https://api.github.com/repos/huggingface/datasets/issues/4448/events | https://github.com/huggingface/datasets/issues/4448 | 1,260,966,129 | I_kwDODunzps5LKNDx | 4,448 | New Preprocessing Feature - Deduplication [Request] | [
{
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists",
"id": 1935892865,
"name": "duplicate",
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate"
},
{
"color": "a2eeef",
... | open | false | null | 2 | 2022-06-05T05:32:56Z | 2023-03-08T17:38:37Z | null | null | **Is your feature request related to a problem? Please describe.**
Many large datasets are full of duplications and it has been shown that deduplicating datasets can lead to better performance while training, and more truthful evaluation at test-time.
A feature that allows one to easily deduplicate a dataset can be cool!
**Describe the solution you'd like**
We can define a function and keep only the first/last data-point that yields the value according to this function.
**Describe alternatives you've considered**
The clear alternative is to repeat a clear boilerplate every time someone want to deduplicate a dataset.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4448/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4448/timeline | null | null | null | null | false | [
"Hi! The [datasets_sql](https://github.com/mariosasko/datasets_sql) package lets you easily find distinct rows in a dataset (an example with `SELECT DISTINCT` is in the readme). Deduplication is (still) not part of the official API because it's hard to implement for datasets bigger than RAM while only using the nat... |
https://api.github.com/repos/huggingface/datasets/issues/2530 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2530/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2530/comments | https://api.github.com/repos/huggingface/datasets/issues/2530/events | https://github.com/huggingface/datasets/pull/2530 | 927,013,773 | MDExOlB1bGxSZXF1ZXN0Njc1MjMyNDk0 | 2,530 | Fixed label parsing in the ProductReviews dataset | [] | closed | false | null | 4 | 2021-06-22T09:12:45Z | 2021-06-22T12:55:20Z | 2021-06-22T12:52:40Z | null | Fixed issue with parsing dataset labels. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2530/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2530/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2530.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2530",
"merged_at": "2021-06-22T12:52:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2530.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2530"
} | true | [
"@lhoestq, can you please review this PR?\r\nWhat exactly is the problem in the test case? Should it matter?",
"Hi ! Thanks for fixing this :)\r\n\r\nThe CI fails for two reasons:\r\n- the `pretty_name` tag is missing in yaml tags in ./datasets/turkish_product_reviews/README.md. You can fix that by adding this in... |
https://api.github.com/repos/huggingface/datasets/issues/447 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/447/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/447/comments | https://api.github.com/repos/huggingface/datasets/issues/447/events | https://github.com/huggingface/datasets/pull/447 | 666,842,115 | MDExOlB1bGxSZXF1ZXN0NDU3NjE2NDA0 | 447 | [BugFix] fix wrong import of DEFAULT_TOKENIZER | [] | closed | false | null | 0 | 2020-07-28T07:41:10Z | 2020-07-28T12:58:01Z | 2020-07-28T12:52:05Z | null | Fixed the path to `DEFAULT_TOKENIZER`
#445 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/447/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/447/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/447.diff",
"html_url": "https://github.com/huggingface/datasets/pull/447",
"merged_at": "2020-07-28T12:52:05Z",
"patch_url": "https://github.com/huggingface/datasets/pull/447.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/447"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2600 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2600/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2600/comments | https://api.github.com/repos/huggingface/datasets/issues/2600/events | https://github.com/huggingface/datasets/issues/2600 | 938,086,745 | MDU6SXNzdWU5MzgwODY3NDU= | 2,600 | Crash when using multiprocessing (`num_proc` > 1) on `filter` and all samples are discarded | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 0 | 2021-07-06T16:53:25Z | 2021-07-07T12:50:31Z | 2021-07-07T12:50:31Z | null | ## Describe the bug
If `filter` is applied to a dataset using multiprocessing (`num_proc` > 1) and all sharded datasets are empty afterwards (due to all samples being discarded), the program crashes.
## Steps to reproduce the bug
```python
from datasets import Dataset
data = Dataset.from_dict({'id': [0,1]})
data.filter(lambda x: False, num_proc=2)
```
## Expected results
An empty table should be returned without crashing.
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 185, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/user/venv/lib/python3.8/site-packages/datasets/fingerprint.py", line 397, in wrapper
out = func(self, *args, **kwargs)
File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 2143, in filter
return self.map(
File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1738, in map
result = concatenate_datasets(transformed_shards)
File "/home/user/venv/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 3267, in concatenate_datasets
table = concat_tables(tables_to_concat, axis=axis)
File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 853, in concat_tables
return ConcatenationTable.from_tables(tables, axis=axis)
File "/home/user/venv/lib/python3.8/site-packages/datasets/table.py", line 713, in from_tables
blocks = to_blocks(tables[0])
IndexError: list index out of range
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.9.0
- Platform: Linux-5.12.11-300.fc34.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.10
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2600/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2600/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3674/comments | https://api.github.com/repos/huggingface/datasets/issues/3674/events | https://github.com/huggingface/datasets/pull/3674 | 1,123,027,874 | PR_kwDODunzps4yBe17 | 3,674 | Add FrugalScore metric | [] | closed | false | null | 5 | 2022-02-03T12:28:52Z | 2022-02-21T15:58:44Z | 2022-02-21T15:58:44Z | null | This pull request add FrugalScore metric for NLG systems evaluation.
FrugalScore is a reference-based metric for NLG models evaluation. It is based on a distillation approach that allows to learn a fixed, low cost version of any expensive NLG metric, while retaining most of its original performance.
Paper: https://arxiv.org/abs/2110.08559?context=cs
Github: https://github.com/moussaKam/FrugalScore
@lhoestq | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3674/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3674/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3674.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3674",
"merged_at": "2022-02-21T15:58:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3674.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3674"
} | true | [
"@lhoestq \r\n\r\nThe model used by default (`moussaKam/frugalscore_tiny_bert-base_bert-score`) is a tiny model.\r\n\r\nI still want to make one modification before merging.\r\nI would like to load the model checkpoint once. Do you think it's a good idea if I load it in `_download_and_prepare`? In this case should ... |
https://api.github.com/repos/huggingface/datasets/issues/1674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1674/comments | https://api.github.com/repos/huggingface/datasets/issues/1674/events | https://github.com/huggingface/datasets/issues/1674 | 777,321,840 | MDU6SXNzdWU3NzczMjE4NDA= | 1,674 | dutch_social can't be loaded | [] | closed | false | null | 8 | 2021-01-01T17:37:08Z | 2022-10-05T13:03:26Z | 2022-10-05T13:03:26Z | null | Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
```
(base) Koens-MacBook-Pro:~ koenvandenberge$ python
Python 3.7.4 (default, Aug 13 2019, 15:17:50)
[Clang 4.0.1 (tags/RELEASE_401/final)] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
dataset = load_dataset(
'dutch_social')
>>> dataset = load_dataset(
... 'dutch_social')
Traceback (most recent call last):
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dutch_social/dutch_social.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 278, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dutch_social/dutch_social.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/Users/koenvandenberge/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 282, in prepare_module
combined_path, github_file_path, file_path
FileNotFoundError: Couldn't find file locally at dutch_social/dutch_social.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/dutch_social/dutch_social.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/dutch_social/dutch_social.py
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1674/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1674/timeline | null | completed | null | null | false | [
"exactly the same issue in some other datasets.\r\nDid you find any solution??\r\n",
"Hi @koenvandenberge and @alighofrani95!\r\nThe datasets you're experiencing issues with were most likely added recently to the `datasets` library, meaning they have not been released yet. They will be released with the v2 of the... |
https://api.github.com/repos/huggingface/datasets/issues/4572 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4572/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4572/comments | https://api.github.com/repos/huggingface/datasets/issues/4572/events | https://github.com/huggingface/datasets/issues/4572 | 1,285,022,499 | I_kwDODunzps5Ml-Mj | 4,572 | Dataset Viewer issue for mlsum | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2022-06-26T20:24:17Z | 2022-07-21T12:40:01Z | 2022-07-21T12:40:01Z | null | ### Link
https://huggingface.co/datasets/mlsum/viewer/de/train
### Description
There's seems to be a problem with the download / streaming of this dataset:
```
Server error
Status code: 400
Exception: BadZipFile
Message: File is not a zip file
```
### Owner
No | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4572/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4572/timeline | null | completed | null | null | false | [
"Thanks for reporting, @lewtun.\r\n\r\nAfter investigation, it seems that the server https://gitlab.lip6.fr does not allow HTTP Range requests.\r\n\r\nWe are trying to find a workaround..."
] |
https://api.github.com/repos/huggingface/datasets/issues/2698 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2698/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2698/comments | https://api.github.com/repos/huggingface/datasets/issues/2698/events | https://github.com/huggingface/datasets/pull/2698 | 950,159,867 | MDExOlB1bGxSZXF1ZXN0Njk0NzUxMzMw | 2,698 | Ignore empty batch when writing | [] | closed | false | null | 0 | 2021-07-21T22:35:30Z | 2021-07-26T14:56:03Z | 2021-07-26T13:25:26Z | null | This prevents an schema update with unknown column types, as reported in #2644.
This is my first attempt at fixing the issue. I tested the following:
- First batch returned by a batched map operation is empty.
- An intermediate batch is empty.
- `python -m unittest tests.test_arrow_writer` passes.
However, `arrow_writer` looks like a pretty generic interface, I'm not sure if there are other uses I may have overlooked. Let me know if that's the case, or if a better approach would be preferable. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2698/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2698/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2698.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2698",
"merged_at": "2021-07-26T13:25:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2698.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2698"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1828 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1828/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1828/comments | https://api.github.com/repos/huggingface/datasets/issues/1828/events | https://github.com/huggingface/datasets/pull/1828 | 802,449,234 | MDExOlB1bGxSZXF1ZXN0NTY4NTkwNDM2 | 1,828 | Add CelebA Dataset | [] | closed | false | null | 9 | 2021-02-05T20:20:55Z | 2021-02-18T14:17:07Z | 2021-02-18T14:17:07Z | null | Trying to add CelebA Dataset.
Need help with testing. Loading examples takes a lot of time so I am unable to generate the `dataset_infos.json` and unable to test. Also, need help with creating `dummy_data.zip`.
Additionally, trying to load a few examples using `load_dataset('./datasets/celeb_a',split='train[10:20]')` still loads all the examples (doesn't stop at 10). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1828/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1828/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/1828.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1828",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1828.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1828"
} | true | [
"Hi @gchhablani! Thanks for all the contributions! We definitely want more image datasets, but Face datasets are tricky in general, in this one includes predicting attributes such as Attractiveness, Gender, or Race, which can be pretty problematic.\r\n\r\nWould you be up for starting with only object classification... |
https://api.github.com/repos/huggingface/datasets/issues/2886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2886/comments | https://api.github.com/repos/huggingface/datasets/issues/2886/events | https://github.com/huggingface/datasets/issues/2886 | 992,534,632 | MDU6SXNzdWU5OTI1MzQ2MzI= | 2,886 | Hj | [] | closed | false | null | 0 | 2021-09-09T18:58:52Z | 2021-09-10T11:46:29Z | 2021-09-10T11:46:29Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2886/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2886/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1223/comments | https://api.github.com/repos/huggingface/datasets/issues/1223/events | https://github.com/huggingface/datasets/pull/1223 | 758,022,208 | MDExOlB1bGxSZXF1ZXN0NTMzMjY2MDc4 | 1,223 | 🇸🇪 Added Swedish Reviews dataset for sentiment classification in Sw… | [] | closed | false | null | 0 | 2020-12-06T21:02:54Z | 2020-12-08T10:54:56Z | 2020-12-08T10:54:56Z | null | perhaps: @lhoestq 🤗 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1223/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1223/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1223.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1223",
"merged_at": "2020-12-08T10:54:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1223.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1223"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3505/comments | https://api.github.com/repos/huggingface/datasets/issues/3505/events | https://github.com/huggingface/datasets/issues/3505 | 1,091,150,820 | I_kwDODunzps5BCaPk | 3,505 | cast_column function not working with map function in streaming mode for Audio features | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-12-30T14:52:01Z | 2022-01-18T19:54:07Z | 2022-01-18T19:54:07Z | null | ## Describe the bug
I am trying to use Audio class for loading audio features using custom dataset. I am able to cast 'audio' feature into 'Audio' format with cast_column function. On using map function, I am not getting 'Audio' casted feature but getting path of audio file only.
I am getting features of 'audio' of string type with load_dataset call. After using cast_column 'audio' feature is converted into 'Audio' type. But in map function I am not able to get Audio type for audio feature & getting string type data containing path of file only. So I am not able to use processor in encode function.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset, Audio
from transformers import Wav2Vec2Processor
def encode(batch, processor):
print("Audio: ",batch['audio'])
batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values
return batch
def print_ds(ds):
iterator = iter(ds)
for d in iterator:
print("Data: ",d)
break
processor = Wav2Vec2Processor.from_pretrained(pretrained_model_path)
dataset = load_dataset("custom_dataset.py","train",data_files={'train':'train_path.txt'},
data_dir="data", streaming=True, split="train")
print("Features: ",dataset.features)
print_ds(dataset)
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
print("Features: ",dataset.features)
print_ds(dataset)
dataset = dataset.map(lambda x: encode(x,processor))
print("Features: ",dataset.features)
print_ds(dataset)
```
## Expected results
map function not printing Audio type features be used with processor function and getting error in processor call due to this.
## Actual results
# after load_dataset call
Features: {'sentence': Value(dtype='string', id=None), 'audio': Value(dtype='string', id=None)}
Data: {'sentence': 'और अपने पेट को माँ की स्वादिष्ट गरमगरम जलेबियाँ हड़पते\n', 'audio': 'data/0116_003.wav'}
# after cast_column call
Features: {'sentence': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None)}
Data: {'sentence': 'और अपने पेट को माँ की स्वादिष्ट गरमगरम जलेबियाँ हड़पते\n', 'audio': {'path': 'data/0116_003.wav', 'array': array([ 1.2662281e-06, 1.0264218e-06, -1.3615092e-06, ...,
1.3017889e-02, 1.0085563e-02, 4.8155054e-03], dtype=float32), 'sampling_rate': 16000}}
# after map call
Features: None
Audio: data/0116_003.wav
Traceback (most recent call last):
File "demo2.py", line 36, in <module>
print_ds(dataset)
File "demo2.py", line 11, in print_ds
for d in iterator:
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 341, in __iter__
for key, example in self._iter():
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 338, in _iter
yield from ex_iterable
File "/opt/conda/lib/python3.7/site-packages/datasets/iterable_dataset.py", line 192, in __iter__
yield key, self.function(example)
File "demo2.py", line 32, in <lambda>
dataset = dataset.map(lambda x: batch_encode(x,processor))
File "demo2.py", line 6, in batch_encode
batch["input_values"] = processor(batch["audio"]['array'], sampling_rate=16000).input_values
TypeError: string indices must be integers
## Environment info
- `datasets` version: 1.17.0
- Platform: Linux-4.14.243 with-debian-bullseye-sid
- Python version: 3.7.9
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3505/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3505/timeline | null | completed | null | null | false | [
"Hi! This is probably due to the fact that `IterableDataset.map` sets `features` to `None` before mapping examples. We can fix the issue by passing the old `features` dict to the map generator and performing encoding/decoding there (before calling the map transform function)."
] |
https://api.github.com/repos/huggingface/datasets/issues/3519 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3519/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3519/comments | https://api.github.com/repos/huggingface/datasets/issues/3519/events | https://github.com/huggingface/datasets/pull/3519 | 1,093,655,205 | PR_kwDODunzps4whnXH | 3,519 | CC100: Using HTTPS for the data source URL fixes load_dataset() | [] | closed | false | null | 0 | 2022-01-04T18:45:54Z | 2022-01-05T17:28:34Z | 2022-01-05T17:28:34Z | null | Without this change the following script (with any lang parameter) consistently fails. After changing to the HTTPS URL, the script works as expected.
```python
from datasets import load_dataset
dataset = load_dataset("cc100", lang="en")
```
This is the error produced by the previous script:
```sh
Using custom data configuration en-lang=en
Downloading and preparing dataset cc100/en to /home/antti/.cache/huggingface/datasets/cc100/en-lang=en/0.0.0/526ac20780de5e074cf73a7466e868cb67f960b48f6de42ff6a6c4e71910d71b...
Traceback (most recent call last):
File "/home/antti/tmp/cc100/cc100.py", line 3, in <module>
dataset = load_dataset("cc100", lang="en")
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/load.py", line 1694, in load_dataset
builder_instance.download_and_prepare(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/builder.py", line 595, in download_and_prepare
self._download_and_prepare(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/builder.py", line 661, in _download_and_prepare
split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
File "/home/antti/.cache/huggingface/modules/datasets_modules/datasets/cc100/526ac20780de5e074cf73a7466e868cb67f960b48f6de42ff6a6c4e71910d71b/cc100.py", line 117, in _split_generators
path = dl_manager.download_and_extract(download_url)
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 308, in download_and_extract
return self.extract(self.download(url_or_urls))
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 196, in download
downloaded_path_or_paths = map_nested(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/py_utils.py", line 251, in map_nested
return function(data_struct)
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/download_manager.py", line 217, in _download
return cached_path(url_or_filename, download_config=download_config)
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 298, in cached_path
output_path = get_from_cache(
File "/home/antti/tmp/cc100/venv/lib/python3.9/site-packages/datasets/utils/file_utils.py", line 617, in get_from_cache
raise ConnectionError(f"Couldn't reach {url} (error {response.status_code})")
ConnectionError: Couldn't reach http://data.statmt.org/cc-100/en.txt.xz (error 503)
```
Note that I get the same behavior also using curl on the command line. The plain HTTP "curl -L http://data.statmt.org/cc-100/en.txt.xz" fails with "503 Service unavailable", but the with the HTTPS version of the URL curl starts downloading the file.
My guess is that the server does overly aggressive rate-limitting. When a client requests an HTTP URL, it (sensibly) gets redirected to the HTTPS equivalent, but now the server notices two requests coming from the same client (the original HTTP and the redirected HTTPS) during a brief time windows and rate-limitter kicks in and blocks the second request! If the client initally uses the HTTPS URL there's only one incoming request which the rate-limitter allows. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3519/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3519/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3519.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3519",
"merged_at": "2022-01-05T17:28:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3519.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3519"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4826 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4826/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4826/comments | https://api.github.com/repos/huggingface/datasets/issues/4826/events | https://github.com/huggingface/datasets/pull/4826 | 1,335,987,583 | PR_kwDODunzps49B0V3 | 4,826 | Fix language tags in dataset cards | [] | closed | false | null | 2 | 2022-08-11T13:47:14Z | 2022-08-11T14:17:48Z | 2022-08-11T14:03:12Z | null | Fix language tags in all dataset cards, so that they are validated (aligned with our `languages.json` resource). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4826/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4826/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4826.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4826",
"merged_at": "2022-08-11T14:03:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4826.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4826"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests are caused by other missing information in the dataset cards."
] |
https://api.github.com/repos/huggingface/datasets/issues/3020 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3020/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3020/comments | https://api.github.com/repos/huggingface/datasets/issues/3020/events | https://github.com/huggingface/datasets/pull/3020 | 1,015,406,105 | PR_kwDODunzps4sprfa | 3,020 | Add a metric for the MATH dataset (competition_math). | [] | closed | false | null | 4 | 2021-10-04T16:52:16Z | 2021-10-22T10:29:31Z | 2021-10-22T10:29:31Z | null | This metric computes accuracy for the MATH dataset (https://arxiv.org/abs/2103.03874) after canonicalizing the prediction and the reference (e.g., converting "1/2" to "\\\\frac{1}{2}"). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3020/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3020/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3020.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3020",
"merged_at": "2021-10-22T10:29:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3020.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3020"
} | true | [
"I believe the only failed test related to this PR is tests/test_metric_common.py::LocalMetricTest::test_load_metric_competition_math. It gives the following error:\r\n\r\nImportError: To be able to use this dataset, you need to install the following dependencies['math_equivalence'] using 'pip install git+https://g... |
https://api.github.com/repos/huggingface/datasets/issues/2100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2100/comments | https://api.github.com/repos/huggingface/datasets/issues/2100/events | https://github.com/huggingface/datasets/pull/2100 | 838,574,631 | MDExOlB1bGxSZXF1ZXN0NTk4NzMzOTM0 | 2,100 | Fix deprecated warning message and docstring | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 3 | 2021-03-23T10:27:52Z | 2021-03-24T08:19:41Z | 2021-03-23T18:03:49Z | null | Fix deprecated warnings:
- Use deprecated Sphinx directive in docstring
- Fix format of deprecated message
- Raise FutureWarning | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2100/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2100/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2100.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2100",
"merged_at": "2021-03-23T18:03:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2100.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2100"
} | true | [
"I have a question: what about `dictionary_encode_column_`?\r\n- It is deprecated in Dataset, but it recommends using a non-existing method instead: `Dataset.dictionary_encode_column` does not exist.\r\n- It is NOT deprecated in DatasetDict.",
"`dictionary_encode_column_ ` should be deprecated since it never work... |
https://api.github.com/repos/huggingface/datasets/issues/567 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/567/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/567/comments | https://api.github.com/repos/huggingface/datasets/issues/567/events | https://github.com/huggingface/datasets/pull/567 | 691,430,245 | MDExOlB1bGxSZXF1ZXN0NDc4MTc2Njgx | 567 | Fix BLEURT metrics for backward compatibility | [] | closed | false | null | 0 | 2020-09-02T21:22:35Z | 2020-09-03T07:29:52Z | 2020-09-03T07:29:50Z | null | Fix #565 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/567/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/567/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/567.diff",
"html_url": "https://github.com/huggingface/datasets/pull/567",
"merged_at": "2020-09-03T07:29:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/567.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/567"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1259/comments | https://api.github.com/repos/huggingface/datasets/issues/1259/events | https://github.com/huggingface/datasets/pull/1259 | 758,565,320 | MDExOlB1bGxSZXF1ZXN0NTMzNzE4NjMz | 1,259 | Add KorQPair dataset | [] | closed | false | null | 2 | 2020-12-07T14:33:57Z | 2021-12-29T00:49:40Z | 2020-12-08T15:11:41Z | null | This PR adds a [Korean paired question dataset](https://github.com/songys/Question_pair) containing labels indicating whether two questions in a given pair are semantically identical. This dataset was used to evaluate the performance of [KoGPT2](https://github.com/SKT-AI/KoGPT2#subtask-evaluations) on a phrase detection downstream task. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1259/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1259/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1259.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1259",
"merged_at": "2020-12-08T15:11:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1259.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1259"
} | true | [
"dummy data is missing",
"Hey @cceyda, thanks for pointing that out. I thought I'd added it, but seems like that wasn't the case. Just pushed a new commit with the dummy data."
] |
https://api.github.com/repos/huggingface/datasets/issues/1624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1624/comments | https://api.github.com/repos/huggingface/datasets/issues/1624/events | https://github.com/huggingface/datasets/issues/1624 | 773,669,700 | MDU6SXNzdWU3NzM2Njk3MDA= | 1,624 | Cannot download ade_corpus_v2 | [] | closed | false | null | 2 | 2020-12-23T10:58:14Z | 2021-08-03T05:08:54Z | 2021-08-03T05:08:54Z | null | I tried this to get the dataset following this url : https://huggingface.co/datasets/ade_corpus_v2
but received this error :
`Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 278, in prepare_module
local_path = cached_path(file_path, download_config=download_config)
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path
use_etag=download_config.use_etag,
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 486, in get_from_cache
raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset
path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 282, in prepare_module
combined_path, github_file_path, file_path
FileNotFoundError: Couldn't find file locally at ade_corpus_v2/ade_corpus_v2.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/ade_corpus_v2/ade_corpus_v2.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/ade_corpus_v2/ade_corpus_v2.py`
| {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1624/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1624/timeline | null | completed | null | null | false | [
"Hi @him1411, the dataset you are trying to load has been added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:\r\n`pip install git+htt... |
https://api.github.com/repos/huggingface/datasets/issues/580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/580/comments | https://api.github.com/repos/huggingface/datasets/issues/580/events | https://github.com/huggingface/datasets/issues/580 | 694,954,551 | MDU6SXNzdWU2OTQ5NTQ1NTE= | 580 | nlp re-creates already-there caches when using a script, but not within a shell | [] | closed | false | null | 2 | 2020-09-07T10:23:50Z | 2020-09-07T15:19:09Z | 2020-09-07T14:26:41Z | null | `nlp` keeps creating new caches for the same file when launching `filter` from a script, and behaves correctly from within the shell.
Example: try running
```
import nlp
hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0)
hans_hard_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 1)
```
twice. If launched from a `file.py` script, the cache will be re-created the second time. If launched as 3 shell/`ipython` commands, `nlp` will correctly re-use the cache.
As observed with @lhoestq. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/580/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/580/timeline | null | completed | null | null | false | [
"Couln't reproduce on my side :/ \r\nlet me know if you manage to reproduce on another env (colab for example)",
"Fixed with a clean re-install!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2868/comments | https://api.github.com/repos/huggingface/datasets/issues/2868/events | https://github.com/huggingface/datasets/issues/2868 | 987,139,146 | MDU6SXNzdWU5ODcxMzkxNDY= | 2,868 | Add Common Objects in 3D (CO3D) | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | open | false | null | 0 | 2021-09-02T20:36:12Z | 2021-12-08T12:02:10Z | null | null | ## Adding a Dataset
- **Name:** *Common Objects in 3D (CO3D)*
- **Description:** *See blog post [here](https://ai.facebook.com/blog/common-objects-in-3d-dataset-for-3d-reconstruction)*
- **Paper:** *[link to paper](https://arxiv.org/abs/2109.00512)*
- **Data:** *[link to data](https://ai.facebook.com/datasets/co3d-downloads/)*
- **Motivation:** *excerpt from above blog post:*
> As the first data set of its kind, CO3D will aptly enable reconstruction of real-life 3D objects. Indeed, CO3D already provides training data to enable our NeRFormer to tackle the new-view synthesis (NVS) task. Here, photorealistic NVS is a major step on the path to fully immersive AR/VR effects, where objects can be virtually transported across different environments, which will allow connecting users by sharing or recollecting their experiences.
>
> Besides practical applications in AR/VR, we hope that the data set will become a standard testbed for the recent proliferation of methods (including NeRFormer, Implicit Differentiable Renderer, NeRF, and others) that reconstruct 3D scenes by means of an implicit shape model.
>
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2868/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2868/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2144/comments | https://api.github.com/repos/huggingface/datasets/issues/2144/events | https://github.com/huggingface/datasets/issues/2144 | 844,352,067 | MDU6SXNzdWU4NDQzNTIwNjc= | 2,144 | Loading wikipedia 20200501.en throws pyarrow related error | [] | open | false | null | 6 | 2021-03-30T10:38:31Z | 2021-04-01T09:21:17Z | null | null | **Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931...
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14.6k/14.6k [00:00<00:00, 5.41MB/s]
Downloading: 59%|███████████████████████████████████████████████████████████████████████████████████████▊ | 10.7G/18.3G [11:30<08:08, 15.5MB/s]
Dataset wikipedia downloaded and prepared to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931. Subsequent calls will reuse this data.
Traceback (most recent call last):
File "load_wiki.py", line 2, in <module>
ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')
File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 751, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 746, in as_dataset
map_tuple=True,
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in map_nested
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in <listcomp>
_single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested
return function(data_struct)
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 763, in _build_single_dataset
in_memory=in_memory,
File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 835, in _as_dataset
in_memory=in_memory,
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 215, in read
return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 236, in read_files
pa_table = self._read_files(files, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 171, in _read_files
pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename
pa_table = ArrowReader.read_table(filename, in_memory=in_memory)
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 324, in read_table
pa_table = f.read_all()
File "pyarrow/ipc.pxi", line 544, in pyarrow.lib.RecordBatchReader.read_all
File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: Expected to be able to read 9176784 bytes for message body, got 4918712
**Detailed version info**
datasets==1.5.0
- dataclasses [required: Any, installed: 0.8]
- dill [required: Any, installed: 0.3.3]
- fsspec [required: Any, installed: 0.8.7]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- huggingface-hub [required: <0.1.0, installed: 0.0.7]
- filelock [required: Any, installed: 3.0.12]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- requests [required: Any, installed: 2.24.0]
- certifi [required: >=2017.4.17, installed: 2020.6.20]
- chardet [required: >=3.0.2,<4, installed: 3.0.4]
- idna [required: >=2.5,<3, installed: 2.6]
- urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10]
- tqdm [required: Any, installed: 4.49.0]
- importlib-metadata [required: Any, installed: 1.7.0]
- zipp [required: >=0.5, installed: 3.1.0]
- multiprocess [required: Any, installed: 0.70.11.1]
- dill [required: >=0.3.3, installed: 0.3.3]
- numpy [required: >=1.17, installed: 1.17.0]
- pandas [required: Any, installed: 1.1.5]
- numpy [required: >=1.15.4, installed: 1.17.0]
- python-dateutil [required: >=2.7.3, installed: 2.8.0]
- six [required: >=1.5, installed: 1.15.0]
- pytz [required: >=2017.2, installed: 2020.1]
- pyarrow [required: >=0.17.1, installed: 3.0.0]
- numpy [required: >=1.16.6, installed: 1.17.0]
- requests [required: >=2.19.0, installed: 2.24.0]
- certifi [required: >=2017.4.17, installed: 2020.6.20]
- chardet [required: >=3.0.2,<4, installed: 3.0.4]
- idna [required: >=2.5,<3, installed: 2.6]
- urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10]
- tqdm [required: >=4.27,<4.50.0, installed: 4.49.0]
- xxhash [required: Any, installed: 2.0.0]
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2144/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2144/timeline | null | null | null | null | false | [
"That's how I loaded the dataset\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')\r\n```",
"Hi ! It looks like the arrow file in the folder\r\n`/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa... |
https://api.github.com/repos/huggingface/datasets/issues/5989 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5989/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5989/comments | https://api.github.com/repos/huggingface/datasets/issues/5989/events | https://github.com/huggingface/datasets/issues/5989 | 1,774,134,091 | I_kwDODunzps5pvyNL | 5,989 | Set a rule on the config and split names | [] | open | false | null | 3 | 2023-06-26T07:34:14Z | 2023-07-19T14:22:54Z | null | null | > should we actually allow characters like spaces? maybe it's better to add validation for whitespace symbols and directly in datasets and raise
https://github.com/huggingface/datasets-server/issues/853
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5989/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5989/timeline | null | null | null | null | false | [
"in this case we need to decide what to do with the existing datasets with white space characters (there shouldn't be a lot of them I think)",
"I imagine that we should stop supporting them, and help the user fix them?",
"See a report where the datasets server fails: https://huggingface.co/datasets/poloclub/dif... |
https://api.github.com/repos/huggingface/datasets/issues/5536 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5536/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5536/comments | https://api.github.com/repos/huggingface/datasets/issues/5536/events | https://github.com/huggingface/datasets/issues/5536 | 1,586,930,643 | I_kwDODunzps5elqPT | 5,536 | Failure to hash function when using .map() | [] | closed | false | null | 8 | 2023-02-16T03:12:07Z | 2023-05-22T20:02:16Z | 2023-02-16T14:56:41Z | null | ### Describe the bug
_Parameter 'function'=<function process at 0x7f1ec4388af0> of the transform datasets.arrow_dataset.Dataset.\_map_single couldn't be hashed properly, a random hash was used instead. Make sure your transforms and parameters are serializable with pickle or dill for the dataset fingerprinting and caching to work. If you reuse this transform, the caching mechanism will consider it to be different from the previous calls and recompute everything. This warning is only showed once. Subsequent hashing failures won't be showed._
This issue with `.map()` happens for me consistently, as also described in closed issue #4506
Dataset indices can be individually serialized using dill and pickle without any errors. I'm using tiktoken to encode in the function passed to map(). Similarly, indices can be individually encoded without error.
### Steps to reproduce the bug
```py
from datasets import load_dataset
import tiktoken
dataset = load_dataset("stas/openwebtext-10k")
enc = tiktoken.get_encoding("gpt2")
tokenized = dataset.map(
process,
remove_columns=['text'],
desc="tokenizing the OWT splits",
)
def process(example):
ids = enc.encode(example['text'])
ids.append(enc.eot_token)
out = {'ids': ids, 'len': len(ids)}
return out
```
### Expected behavior
Should encode simple text objects.
### Environment info
Python versions tried: both 3.8 and 3.10.10
`PYTHONUTF8=1` as env variable
Datasets tried:
- stas/openwebtext-10k
- rotten_tomatoes
- local text file
OS: Ubuntu Linux 20.04
Package versions:
- torch 1.13.1
- dill 0.3.4 (if using 0.3.6 - same issue)
- datasets 2.9.0
- tiktoken 0.2.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5536/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5536/timeline | null | completed | null | null | false | [
"Hi ! `enc` is not hashable:\r\n```python\r\nimport tiktoken\r\nfrom datasets.fingerprint import Hasher\r\n\r\nenc = tiktoken.get_encoding(\"gpt2\")\r\nHasher.hash(enc)\r\n# raises TypeError: cannot pickle 'builtins.CoreBPE' object\r\n```\r\nIt happens because it's not picklable, and because of that it's not possib... |
https://api.github.com/repos/huggingface/datasets/issues/1079 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1079/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1079/comments | https://api.github.com/repos/huggingface/datasets/issues/1079/events | https://github.com/huggingface/datasets/pull/1079 | 756,652,427 | MDExOlB1bGxSZXF1ZXN0NTMyMTY4Nzky | 1,079 | nkjp-ner | [] | closed | false | null | 0 | 2020-12-03T22:47:26Z | 2020-12-04T09:42:06Z | 2020-12-04T09:42:06Z | null | - **Name:** *nkjp-ner*
- **Description:** *The NKJP-NER is based on a human-annotated part of NKJP. We extracted sentences with named entities of exactly one type. The task is to predict the type of the named entity.*
- **Data:** *https://klejbenchmark.com/tasks/*
- **Motivation:** *The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for the Polish language understanding.*
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1079/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1079/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1079.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1079",
"merged_at": "2020-12-04T09:42:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1079.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1079"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3790/comments | https://api.github.com/repos/huggingface/datasets/issues/3790/events | https://github.com/huggingface/datasets/pull/3790 | 1,150,646,899 | PR_kwDODunzps4zedMa | 3,790 | Add doc builder scripts | [] | closed | false | null | 3 | 2022-02-25T16:38:47Z | 2022-03-01T15:55:42Z | 2022-03-01T15:55:41Z | null | I added the three scripts:
- build_dev_documentation.yml
- build_documentation.yml
- delete_dev_documentation.yml
I got them from `transformers` and did a few changes:
- I removed the `transformers`-specific dependencies
- I changed all the paths to be "datasets" instead of "transformers"
- I passed the `--library_name datasets` arg to the `doc-builder build` command (according to https://github.com/huggingface/doc-builder/pull/94/files#diff-bcc33cf7c223511e498776684a9a433810b527a0a38f483b1487e8a42b6575d3R26)
cc @LysandreJik @mishig25 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3790/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3790/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3790.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3790",
"merged_at": "2022-03-01T15:55:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3790.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3790"
} | true | [
"I think we're only missing the hosted runner to be configured for this repository and we should be good",
"Regarding the self-hosted runner, I actually encourage using the approach defined here: https://github.com/huggingface/transformers/pull/15710, which doesn't leverage a self-hosted runner. This prevents que... |
https://api.github.com/repos/huggingface/datasets/issues/4768 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4768/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4768/comments | https://api.github.com/repos/huggingface/datasets/issues/4768/events | https://github.com/huggingface/datasets/pull/4768 | 1,321,913,645 | PR_kwDODunzps48TRUH | 4,768 | Unpin rouge_score test dependency | [] | closed | false | null | 1 | 2022-07-29T08:17:40Z | 2022-07-29T16:42:28Z | 2022-07-29T16:29:17Z | null | Once `rouge-score` has made the 0.1.2 release to fix their issue https://github.com/google-research/google-research/issues/1212, we can unpin it.
Related to:
- #4735 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4768/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4768/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4768.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4768",
"merged_at": "2022-07-29T16:29:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4768.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4768"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/185/comments | https://api.github.com/repos/huggingface/datasets/issues/185/events | https://github.com/huggingface/datasets/pull/185 | 623,172,484 | MDExOlB1bGxSZXF1ZXN0NDIxODkxNjY2 | 185 | [Commands] In-detail instructions to create dummy data folder | [] | closed | false | null | 1 | 2020-05-22T12:26:25Z | 2020-05-22T14:06:35Z | 2020-05-22T14:06:34Z | null | ### Dummy data command
This PR adds a new command `python nlp-cli dummy_data <path_to_dataset_folder>` that gives in-detail instructions on how to add the dummy data files.
It would be great if you can try it out by moving the current dummy_data folder of any dataset in `./datasets` with `mv datasets/<dataset_script>/dummy_data datasets/<dataset_name>/dummy_data_copy` and running the command `python nlp-cli dummy_data ./datasets/<dataset_name>` to see if you like the instructions.
### CONTRIBUTING.md
Also the CONTRIBUTING.md is made cleaner including a new section on "How to add a dataset".
### Current PRs
It would be nice if we can try out if this command helps current PRs, *e.g.* #169 to add a dataset. I comment on those PRs. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/185/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/185/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/185.diff",
"html_url": "https://github.com/huggingface/datasets/pull/185",
"merged_at": "2020-05-22T14:06:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/185.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/185"
} | true | [
"awesome !"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.