url stringlengths 58 61 | repository_url stringclasses 1 value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 46 51 | id int64 599M 1.83B | node_id stringlengths 18 32 | number int64 1 6.09k | title stringlengths 1 290 | labels list | state stringclasses 2 values | locked bool 1 class | milestone dict | comments int64 0 54 | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | closed_at stringlengths 20 20 ⌀ | active_lock_reason null | body stringlengths 0 228k ⌀ | reactions dict | timeline_url stringlengths 67 70 | performed_via_github_app null | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict | is_pull_request bool 2 classes | comments_text list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/497/comments | https://api.github.com/repos/huggingface/datasets/issues/497/events | https://github.com/huggingface/datasets/pull/497 | 677,057,116 | MDExOlB1bGxSZXF1ZXN0NDY2MjQ2NDQ3 | 497 | skip header in PAWS-X | [] | closed | false | null | 0 | 2020-08-11T17:26:25Z | 2020-08-19T09:50:02Z | 2020-08-19T09:50:01Z | null | This should fix #485
I also updated the `dataset_infos.json` file that is used to verify the integrity of the generated splits (the number of examples was reduced by one).
Note that there are new fields in `dataset_infos.json`, introduced in the latest release 0.4.0, corresponding to post-processing info. I removed them in this case when I ran `nlp-cli ./datasets/xtreme --save_infos` to keep backward compatibility (version 0.3.0 can't load these fields).
I think I'll change the logic so that `nlp-cli test` doesn't create these fields for datasets with no post-processing | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/497/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/497/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/497.diff",
"html_url": "https://github.com/huggingface/datasets/pull/497",
"merged_at": "2020-08-19T09:50:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/497.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/497"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5582 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5582/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5582/comments | https://api.github.com/repos/huggingface/datasets/issues/5582/events | https://github.com/huggingface/datasets/pull/5582 | 1,600,932,092 | PR_kwDODunzps5K0ZcN | 5,582 | Add column_names to IterableDataset | [] | closed | false | null | 2 | 2023-02-27T10:50:07Z | 2023-03-13T19:10:22Z | 2023-03-13T19:03:32Z | null | This PR closes #5383
* Add column_names property to IterableDataset
* Add multiple tests for this new property | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5582/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5582/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5582.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5582",
"merged_at": "2023-03-13T19:03:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5582.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5582"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/3866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3866/comments | https://api.github.com/repos/huggingface/datasets/issues/3866/events | https://github.com/huggingface/datasets/pull/3866 | 1,162,833,848 | PR_kwDODunzps40HWcu | 3,866 | Bring back imgs so that forks don't get broken | [] | closed | false | null | 3 | 2022-03-08T16:01:31Z | 2022-03-08T17:37:02Z | 2022-03-08T17:37:01Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3866/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3866/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3866.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3866",
"merged_at": "2022-03-08T17:37:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3866.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3866"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3866). All of your documentation changes will be reflected on that endpoint.",
"I think we just need to keep `datasets_logo_name.jpg` and `course_banner.png` because they appear in the README.md of the forks of `datasets`. The ... |
https://api.github.com/repos/huggingface/datasets/issues/2189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2189/comments | https://api.github.com/repos/huggingface/datasets/issues/2189/events | https://github.com/huggingface/datasets/issues/2189 | 853,052,891 | MDU6SXNzdWU4NTMwNTI4OTE= | 2,189 | save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object. | [] | closed | false | null | 1 | 2021-04-08T04:42:53Z | 2022-06-01T16:32:15Z | 2022-06-01T16:32:15Z | null | As you can see, it saves the entire dataset.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk,concatenate_datasets
loaded_data=load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n=20
kb_list=[loaded_data.shard(n, i, contiguous=True) for i in range(n)]
final_dataset=concatenate_datasets([kb_list[1],kb_list[2]])
final_dataset.save_to_disk('/home/gsir059/haha/k.arrow')
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2189/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2189/timeline | null | completed | null | null | false | [
"Hi ! We refactored save_to_disk in #2025 so this doesn't happen.\r\nFeel free to try it on master for now\r\nWe'll do a new release soon"
] |
https://api.github.com/repos/huggingface/datasets/issues/3895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3895/comments | https://api.github.com/repos/huggingface/datasets/issues/3895/events | https://github.com/huggingface/datasets/pull/3895 | 1,166,619,182 | PR_kwDODunzps40T1C8 | 3,895 | Fix code examples indentation | [] | closed | false | null | 4 | 2022-03-11T16:29:04Z | 2022-03-11T17:34:30Z | 2022-03-11T17:34:29Z | null | Some code examples are currently not rendered correctly. I think this is because they are over-indented
cc @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3895/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3895/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3895.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3895",
"merged_at": "2022-03-11T17:34:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3895.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3895"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_3895). All of your documentation changes will be reflected on that endpoint.",
"Still not rendered properly: https://moon-ci-docs.huggingface.co/docs/datasets/pr_3895/en/package_reference/main_classes#datasets.Dataset.align_lab... |
https://api.github.com/repos/huggingface/datasets/issues/2516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2516/comments | https://api.github.com/repos/huggingface/datasets/issues/2516/events | https://github.com/huggingface/datasets/issues/2516 | 924,597,470 | MDU6SXNzdWU5MjQ1OTc0NzA= | 2,516 | datasets.map pickle issue resulting in invalid mapping function | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 7 | 2021-06-18T06:47:26Z | 2021-06-23T13:47:49Z | null | null | I trained my own tokenizer, and I needed to use a python custom class. Because of this I have to detach the custom step before saving and reattach after restore. I did this using the standard pickle `__get_state__` / `__set_state__` mechanism. I think it's correct but it fails when I use it inside a function which is mapped to a dataset, i.e. in the manner of run_mlm.py and other huggingface scripts.
The following reproduces the issue - most likely I'm missing something
A simulated tokeniser which can be pickled
```
class CustomTokenizer:
def __init__(self):
self.state = "init"
def __getstate__(self):
print("__getstate__ called")
out = self.__dict__.copy()
self.state = "pickled"
return out
def __setstate__(self, d):
print("__setstate__ called")
self.__dict__ = d
self.state = "restored"
tokenizer = CustomTokenizer()
```
Test that it actually works - prints "__getstate__ called" and "__setstate__ called"
```
import pickle
serialized = pickle.dumps(tokenizer)
restored = pickle.loads(serialized)
assert restored.state == "restored"
```
Simulate a function that tokenises examples; this is the function that dataset.map will call
```
def tokenize_function(examples):
assert tokenizer.state == "restored" # this shouldn't fail but it does
output = tokenizer(examples) # this will fail as tokenizer isn't really a tokenizer
return output
```
Use map to simulate tokenization
```
import glob
from datasets import load_dataset
assert tokenizer.state == "restored"
train_files = glob.glob('train*.csv')
validation_files = glob.glob('validation*.csv')
datasets = load_dataset("csv", data_files=dict(train=train_files, validation=validation_files))
tokenized_datasets = datasets.map(
tokenize_function,
batched=True,
)
```
What's happening is I can see that __getstate__ is called but not __setstate__, so the state of `tokenize_function` is invalid at the point that it's actually executed. This doesn't matter as far as I can see for the standard tokenizers as they don't use __getstate__ / __setstate__. I'm not sure if there's another hook I'm supposed to implement as well?
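For illustration, here is a minimal, hypothetical sketch (using plain `pickle` instead of the `dill`-based hashing that `datasets` reportedly uses for fingerprinting) of how hashing a function's closure can trigger `__getstate__` without ever calling `__setstate__` — which would leave the live tokenizer in the "pickled" state and explain the failing assert:
```
import hashlib
import pickle

class CustomTokenizer:
    def __init__(self):
        self.state = "init"
    def __getstate__(self):
        out = self.__dict__.copy()
        self.state = "pickled"  # side effect on the *live* object
        return out
    def __setstate__(self, d):
        self.__dict__ = d
        self.state = "restored"

tokenizer = CustomTokenizer()

def fingerprint(obj):
    # Hash the pickled bytes only; nothing is ever unpickled,
    # so __setstate__ is never invoked.
    return hashlib.md5(pickle.dumps(obj)).hexdigest()

fingerprint(tokenizer)
print(tokenizer.state)  # prints "pickled", not "restored"
```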
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-22-a2aef4f74aaa> in <module>
8 tokenized_datasets = datasets.map(
9 tokenize_function,
---> 10 batched=True,
11 )
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in map(self, function, with_indices, input_columns, batched, batch_size, remove_columns, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, desc)
487 desc=desc,
488 )
--> 489 for k, dataset in self.items()
490 }
491 )
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0)
487 desc=desc,
488 )
--> 489 for k, dataset in self.items()
490 }
491 )
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1633 fn_kwargs=fn_kwargs,
1634 new_fingerprint=new_fingerprint,
-> 1635 desc=desc,
1636 )
1637 else:
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
184 }
185 # apply actual function
--> 186 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
187 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
188 # re-apply format to the output
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
395 # Call actual function
396
--> 397 out = func(self, *args, **kwargs)
398
399 # Update fingerprint of in-place transforms + update in-place history of transforms
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, desc)
1961 indices,
1962 check_same_num_examples=len(input_dataset.list_indexes()) > 0,
-> 1963 offset=offset,
1964 )
1965 except NumExamplesMismatch:
~/.pyenv/versions/3.7.6/envs/xxx/lib/python3.7/site-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset)
1853 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset
1854 processed_inputs = (
-> 1855 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
1856 )
1857 if update_data is None:
<ipython-input-21-8ee4a8ba5b1b> in tokenize_function(examples)
1 def tokenize_function(examples):
----> 2 assert tokenizer.state == "restored"
3 tokenizer(examples)
4 return examples
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2516/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2516/timeline | null | null | null | null | false | [
"Hi ! `map` calls `__getstate__` using `dill` to hash your map function. This is used by the caching mechanism to recover previously computed results. That's why you don't see any `__setstate__` call.\r\n\r\nWhy do you change an attribute of your tokenizer when `__getstate__` is called ?",
"@lhoestq because if I ... |
https://api.github.com/repos/huggingface/datasets/issues/452 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/452/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/452/comments | https://api.github.com/repos/huggingface/datasets/issues/452/events | https://github.com/huggingface/datasets/pull/452 | 667,498,295 | MDExOlB1bGxSZXF1ZXN0NDU4MTUzNjQy | 452 | Guardian authorship dataset | [] | closed | false | null | 6 | 2020-07-29T02:23:57Z | 2020-08-20T15:09:57Z | 2020-08-20T15:07:56Z | null | A new dataset: Guardian news articles for authorship attribution
**tests passed:**
python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs
RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship
**Tests failed:**
Real data: RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_guardian_authorship
output: __init__() missing 3 required positional arguments: 'train_folder', 'valid_folder', and 'tes...'
Remarks: This is the init function of my class. I am not sure why it passes in both my tests and with nlp-cli, but fails here. By the way, I ran this command with two other datasets and they failed:
* glue - OSError: Cannot find data file.
* newsgroup - FileNotFoundError: Local file datasets/newsgroup/dummy/18828_comp.graphics/3.0.0/dummy_data.zip doesn't exist
Thank you for letting us contribute to such a huge and important library!
EDIT:
I was able to fix the dummy_data issue. This dataset has around 14 configurations. I was testing with only 2, but their versions were not in a sequence; they were V1.0.0 and V.12.0.0. It seems that the testing code generates tests for all the versions from 0 to MAX, and was testing for versions (and dummy_data.zip files) that do not exist. I fixed that by changing the versions to 1 and 2.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/452/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/452/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/452.diff",
"html_url": "https://github.com/huggingface/datasets/pull/452",
"merged_at": "2020-08-20T15:07:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/452.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/452"
} | true | [
"Hi ! Glad you managed to fix the version issue.\r\n\r\nThe command `\r\npython nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs` is supposed to generate a json file `dataset_infos.json` next to your dataset script, but I can't see it in the PR.\r\nCan you make sure you have the json file ... |
https://api.github.com/repos/huggingface/datasets/issues/943 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/943/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/943/comments | https://api.github.com/repos/huggingface/datasets/issues/943/events | https://github.com/huggingface/datasets/pull/943 | 754,192,491 | MDExOlB1bGxSZXF1ZXN0NTMwMTM2ODM3 | 943 | The FLUE Benchmark | [] | closed | false | null | 0 | 2020-12-01T09:00:50Z | 2020-12-01T15:24:38Z | 2020-12-01T15:24:30Z | null | This PR adds the [FLUE](https://github.com/getalp/Flaubert/tree/master/flue) benchmark which is a set of different datasets to evaluate models for French content.
Two datasets are missing: the French Treebank, which we can use only for research purposes and are not allowed to distribute, and the Word Sense Disambiguation for Nouns, which will be added later. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/943/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/943/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/943.diff",
"html_url": "https://github.com/huggingface/datasets/pull/943",
"merged_at": "2020-12-01T15:24:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/943.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/943"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5374/comments | https://api.github.com/repos/huggingface/datasets/issues/5374/events | https://github.com/huggingface/datasets/issues/5374 | 1,501,872,945 | I_kwDODunzps5ZhMMx | 5,374 | Using too many threads results in: Got disconnected from remote data host. Retrying in 5sec | [] | closed | false | null | 7 | 2022-12-18T11:38:58Z | 2023-07-24T15:23:07Z | 2023-07-24T15:23:07Z | null | ### Describe the bug
`streaming_download_manager` seems to disconnect if too many runs access the same underlying dataset 🧐
The code works fine for me with ~100 runs in parallel, but disconnects once I scale to 200.
Possibly related:
- https://github.com/huggingface/datasets/pull/3100
- https://github.com/huggingface/datasets/pull/3050
### Steps to reproduce the bug
Running
```python
c4 = datasets.load_dataset("c4", "en", split="train", streaming=True).skip(args.start).take(args.end-args.start)
df = pd.DataFrame(c4, index=None)
```
with different start & end arguments on 200 CPUs in parallel yields:
```
WARNING:datasets.load:Using the latest cached version of the module from /users/muennighoff/.cache/huggingface/modules/datasets_modules/datasets/c4/df532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01 (last modified on Mon Dec 12 10:45:02 2022) since it couldn't be found locally at c4.
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [1/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [2/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [3/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [4/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [5/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [6/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [7/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [8/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [9/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [10/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [11/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [12/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [13/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [14/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [15/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [16/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [17/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [18/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [19/20]
WARNING:datasets.download.streaming_download_manager:Got disconnected from remote data host. Retrying in 5sec [20/20]
╭───────────────────── Traceback (most recent call last) ──────────────────────╮
│ /pfs/lustrep4/scratch/project_462000119/muennighoff/dec-2022-tasky/inference │
│ _c4.py:68 in <module> │
│ │
│ 65 │ model.eval() │
│ 66 │ │
│ 67 │ c4 = datasets.load_dataset("c4", "en", split="train", streaming=Tru │
│ ❱ 68 │ df = pd.DataFrame(c4, index=None) │
│ 69 │ texts = df["text"].to_list() │
│ 70 │ preds = batch_inference(texts, batch_size=args.batch_size) │
│ 71 │
│ │
│ /opt/cray/pe/python/3.9.12.1/lib/python3.9/site-packages/pandas/core/frame.p │
│ y:684 in __init__ │
│ │
│ 681 │ │ # For data is list-like, or Iterable (will consume into list │
│ 682 │ │ elif is_list_like(data): │
│ 683 │ │ │ if not isinstance(data, (abc.Sequence, ExtensionArray)): │
│ ❱ 684 │ │ │ │ data = list(data) │
│ 685 │ │ │ if len(data) > 0: │
│ 686 │ │ │ │ if is_dataclass(data[0]): │
│ 687 │ │ │ │ │ data = dataclasses_to_dicts(data) │
│ │
│ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │
│ lib/python3.9/site-packages/datasets/iterable_dataset.py:751 in __iter__ │
│ │
│ 748 │ │ yield from ex_iterable.shard_data_sources(shard_idx) │
│ 749 │ │
│ 750 │ def __iter__(self): │
│ ❱ 751 │ │ for key, example in self._iter(): │
│ 752 │ │ │ if self.features: │
│ 753 │ │ │ │ # `IterableDataset` automatically fills missing colum │
│ 754 │ │ │ │ # This is done with `_apply_feature_types`. │
│ │
│ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │
│ lib/python3.9/site-packages/datasets/iterable_dataset.py:741 in _iter │
│ │
│ 738 │ │ │ ex_iterable = self._ex_iterable.shuffle_data_sources(self │
│ 739 │ │ else: │
│ 740 │ │ │ ex_iterable = self._ex_iterable │
│ ❱ 741 │ │ yield from ex_iterable │
│ 742 │ │
│ 743 │ def _iter_shard(self, shard_idx: int): │
│ 744 │ │ if self._shuffling: │
│ │
│ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │
│ lib/python3.9/site-packages/datasets/iterable_dataset.py:617 in __iter__ │
│ │
│ 614 │ │ self.n = n │
│ 615 │ │
│ 616 │ def __iter__(self): │
│ ❱ 617 │ │ yield from islice(self.ex_iterable, self.n) │
│ 618 │ │
│ 619 │ def shuffle_data_sources(self, generator: np.random.Generator) -> │
│ 620 │ │ """Doesn't shuffle the wrapped examples iterable since it wou │
│ │
│ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │
│ lib/python3.9/site-packages/datasets/iterable_dataset.py:594 in __iter__ │
│ │
│ 591 │ │
│ 592 │ def __iter__(self): │
│ 593 │ │ #ex_iterator = iter(self.ex_iterable) │
│ ❱ 594 │ │ yield from islice(self.ex_iterable, self.n, None) │
│ 595 │ │ #for _ in range(self.n): │
│ 596 │ │ # next(ex_iterator) │
│ 597 │ │ #yield from islice(ex_iterator, self.n, None) │
│ │
│ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │
│ lib/python3.9/site-packages/datasets/iterable_dataset.py:106 in __iter__ │
│ │
│ 103 │ │ self.kwargs = kwargs │
│ 104 │ │
│ 105 │ def __iter__(self): │
│ ❱ 106 │ │ yield from self.generate_examples_fn(**self.kwargs) │
│ 107 │ │
│ 108 │ def shuffle_data_sources(self, generator: np.random.Generator) -> │
│ 109 │ │ return ShardShuffledExamplesIterable(self.generate_examples_f │
│ │
│ /users/muennighoff/.cache/huggingface/modules/datasets_modules/datasets/c4/d │
│ f532b158939272d032cc63ef19cd5b83e9b4d00c922b833e4cb18b2e9869b01/c4.py:89 in │
│ _generate_examples │
│ │
│ 86 │ │ for filepath in filepaths: │
│ 87 │ │ │ logger.info("generating examples from = %s", filepath) │
│ 88 │ │ │ with gzip.open(open(filepath, "rb"), "rt", encoding="utf-8" │
│ ❱ 89 │ │ │ │ for line in f: │
│ 90 │ │ │ │ │ if line: │
│ 91 │ │ │ │ │ │ example = json.loads(line) │
│ 92 │ │ │ │ │ │ yield id_, example │
│ │
│ /opt/cray/pe/python/3.9.12.1/lib/python3.9/gzip.py:313 in read1 │
│ │
│ 310 │ │ │
│ 311 │ │ if size < 0: │
│ 312 │ │ │ size = io.DEFAULT_BUFFER_SIZE │
│ ❱ 313 │ │ return self._buffer.read1(size) │
│ 314 │ │
│ 315 │ def peek(self, n): │
│ 316 │ │ self._check_not_closed() │
│ │
│ /opt/cray/pe/python/3.9.12.1/lib/python3.9/_compression.py:68 in readinto │
│ │
│ 65 │ │
│ 66 │ def readinto(self, b): │
│ 67 │ │ with memoryview(b) as view, view.cast("B") as byte_view: │
│ ❱ 68 │ │ │ data = self.read(len(byte_view)) │
│ 69 │ │ │ byte_view[:len(data)] = data │
│ 70 │ │ return len(data) │
│ 71 │
│ │
│ /opt/cray/pe/python/3.9.12.1/lib/python3.9/gzip.py:493 in read │
│ │
│ 490 │ │ │ │ self._new_member = False │
│ 491 │ │ │ │
│ 492 │ │ │ # Read a chunk of data from the file │
│ ❱ 493 │ │ │ buf = self._fp.read(io.DEFAULT_BUFFER_SIZE) │
│ 494 │ │ │ │
│ 495 │ │ │ uncompress = self._decompressor.decompress(buf, size) │
│ 496 │ │ │ if self._decompressor.unconsumed_tail != b"": │
│ │
│ /opt/cray/pe/python/3.9.12.1/lib/python3.9/gzip.py:96 in read │
│ │
│ 93 │ │ │ read = self._read │
│ 94 │ │ │ self._read = None │
│ 95 │ │ │ return self._buffer[read:] + \ │
│ ❱ 96 │ │ │ │ self.file.read(size-self._length+read) │
│ 97 │ │
│ 98 │ def prepend(self, prepend=b''): │
│ 99 │ │ if self._read is None: │
│ │
│ /pfs/lustrep4/scratch/project_462000119/muennighoff/nov-2022-bettercom/venv/ │
│ lib/python3.9/site-packages/datasets/download/streaming_download_manager.py: │
│ 365 in read_with_retries │
│ │
│ 362 │ │ │ │ ) │
│ 363 │ │ │ │ time.sleep(config.STREAMING_READ_RETRY_INTERVAL) │
│ 364 │ │ else: │
│ ❱ 365 │ │ │ raise ConnectionError("Server Disconnected") │
│ 366 │ │ return out │
│ 367 │ │
│ 368 │ file_obj.read = read_with_retries │
╰──────────────────────────────────────────────────────────────────────────────╯
ConnectionError: Server Disconnected
```
### Expected behavior
There should be no disconnect I think.
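A possible mitigation while this is investigated — a sketch that assumes the retry behaviour is governed by the `STREAMING_READ_MAX_RETRIES` / `STREAMING_READ_RETRY_INTERVAL` settings visible in the traceback, and that these module-level values are read at call time so they can be raised before streaming starts:
```python
import datasets
import datasets.config

# Hypothetical tuning: retry more often and wait longer between attempts,
# to ride out transient disconnects when many jobs stream the same files.
datasets.config.STREAMING_READ_MAX_RETRIES = 60     # the log suggests the default is 20
datasets.config.STREAMING_READ_RETRY_INTERVAL = 30  # seconds; the log suggests the default is 5

c4 = datasets.load_dataset("c4", "en", split="train", streaming=True)
```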
### Environment info
```
datasets=2.7.0
Python 3.9.12
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5374/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5374/timeline | null | completed | null | null | false | [
"The data files are hosted on HF at https://huggingface.co/datasets/allenai/c4/tree/main\r\n\r\nYou have 200 runs streaming the same files in parallel. So this is probably a Hub limitation. Maybe rate limiting ? cc @julien-c \r\n\r\nMaybe you can also try to reduce the number of HTTP requests by increasing the bloc... |
https://api.github.com/repos/huggingface/datasets/issues/657 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/657/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/657/comments | https://api.github.com/repos/huggingface/datasets/issues/657/events | https://github.com/huggingface/datasets/issues/657 | 706,204,383 | MDU6SXNzdWU3MDYyMDQzODM= | 657 | Squad Metric Description & Feature Mismatch | [] | closed | false | null | 2 | 2020-09-22T09:07:00Z | 2020-10-13T02:16:56Z | 2020-09-29T15:57:38Z | null | The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/657/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/657/timeline | null | completed | null | null | false | [
"Thanks for reporting !\r\nThere indeed a mismatch between the features and the kwargs description\r\n\r\nI believe `answer_start` was added to match the squad dataset format for consistency, even though it is not used in the metric computation. I think I'd rather keep it this way, so that you can just give `refere... |
https://api.github.com/repos/huggingface/datasets/issues/4186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4186/comments | https://api.github.com/repos/huggingface/datasets/issues/4186/events | https://github.com/huggingface/datasets/pull/4186 | 1,209,463,599 | PR_kwDODunzps42evF5 | 4,186 | Fix outdated docstring about default dataset config | [] | closed | false | null | 1 | 2022-04-20T10:04:51Z | 2022-04-22T12:54:44Z | 2022-04-22T12:48:31Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4186/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4186/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4186.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4186",
"merged_at": "2022-04-22T12:48:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4186.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4186"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2548 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2548/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2548/comments | https://api.github.com/repos/huggingface/datasets/issues/2548/events | https://github.com/huggingface/datasets/issues/2548 | 929,232,831 | MDU6SXNzdWU5MjkyMzI4MzE= | 2,548 | Field order issue in loading json | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-06-24T13:29:53Z | 2021-06-24T14:36:43Z | 2021-06-24T14:34:05Z | null | ## Describe the bug
The `load_dataset` function expects columns in alphabetical order when loading json files.
A similar bug was previously reported for csv in #623 and fixed in #684.
## Steps to reproduce the bug
For a json file `j.json`,
```
{"c":321, "a": 1, "b": 2}
```
Running the following,
```
f= datasets.Features({'a': Value('int32'), 'b': Value('int32'), 'c': Value('int32')})
json_data = datasets.load_dataset('json', data_files='j.json', features=f)
```
## Expected results
A successful load.
## Actual results
```
File "pyarrow/table.pxi", line 1409, in pyarrow.lib.Table.cast
ValueError: Target schema's field names are not matching the table's field names: ['c', 'a', 'b'], ['a', 'b', 'c']
```
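Until the fix is released, one possible workaround — a sketch that simply follows this issue's own observation that the file's column order must match the declared features, by rewriting the file with keys in a fixed (alphabetical) order; the file name and values are the ones from the example above:
```
import json

row = {"c": 321, "a": 1, "b": 2}
with open("j.json", "w") as fp:
    # sort_keys=True writes {"a": 1, "b": 2, "c": 321}, matching the order of the declared features
    fp.write(json.dumps(row, sort_keys=True) + "\n")
```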
## Environment info
- `datasets` version: 1.8.0
- Platform: Linux-3.10.0-957.1.3.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyArrow version: 3.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2548/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2548/timeline | null | completed | null | null | false | [
"Hi @luyug, thanks for reporting.\r\n\r\nThe good news is that we fixed this issue only 9 days ago: #2507.\r\n\r\nThe patch is already in the master branch of our repository and it will be included in our next `datasets` release version 1.9.0.\r\n\r\nFeel free to reopen the issue if the problem persists."
] |
https://api.github.com/repos/huggingface/datasets/issues/2333 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2333/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2333/comments | https://api.github.com/repos/huggingface/datasets/issues/2333/events | https://github.com/huggingface/datasets/pull/2333 | 879,214,067 | MDExOlB1bGxSZXF1ZXN0NjMyOTUwNzIy | 2,333 | Fix duplicate keys | [] | closed | false | null | 1 | 2021-05-07T15:28:08Z | 2021-05-08T21:47:31Z | 2021-05-07T15:57:08Z | null | As noticed in https://github.com/huggingface/datasets/pull/2245, many datasets yield duplicate keys.
Most of the time it was because the counter used for ids was reset at each new data file. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2333/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2333/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2333.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2333",
"merged_at": "2021-05-07T15:57:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2333.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2333"
} | true | [
"- @jplu "
] |
https://api.github.com/repos/huggingface/datasets/issues/1755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1755/comments | https://api.github.com/repos/huggingface/datasets/issues/1755/events | https://github.com/huggingface/datasets/issues/1755 | 790,324,734 | MDU6SXNzdWU3OTAzMjQ3MzQ= | 1,755 | Using select/reordering datasets slows operations down immensely | [] | closed | false | null | 2 | 2021-01-20T21:12:12Z | 2021-01-20T22:03:39Z | 2021-01-20T22:03:39Z | null | I am using portions of HF's helpful work in preparing / scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-order the dataset, computations slow down immensely: the total scoring process on 131k training examples, which would take maybe 3 minutes, now takes over an hour.
The below example should be reproducible, and I have run myself down this path because I want to use HF's scoring functions and helpful data preparation, but use my own trainer. The training process uses shuffle, and therefore the order I trained on no longer matches the original data set order. So, to score my results correctly, the original data set needs to match the order of the training. This requires that I: (1) collect the index for each row of data emitted during training, and (2) use this index information to re-order the datasets correctly so the orders match when I go to score.
The problem is that the dataset class starts performing very poorly as soon as you start manipulating its order at this scale.
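(A hedged aside before the full reproduction below: if the slow-down comes from the indices mapping that `select`/`shuffle` create, materializing it may help — a sketch assuming `Dataset.flatten_indices()` is available in the installed version and rewrites the table in the new row order:)
```
import numpy as np
from datasets import load_dataset

squad = load_dataset("squad_v2")["train"]
order = np.random.permutation(len(squad)).tolist()

reordered = squad.select(indices=order)  # fast, but row access now goes through an indices mapping
reordered = reordered.flatten_indices()  # materialize the new order so access is contiguous again
```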
```
from datasets import load_dataset, load_metric
from transformers import BertTokenizerFast, BertForQuestionAnswering
from elasticsearch import Elasticsearch
import numpy as np
import collections
from tqdm.auto import tqdm
import torch
# from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv-
tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
max_length = 384 # The maximum length of a feature (question and context)
doc_stride = 128 # The authorized overlap between two part of the context when splitting it is needed.
pad_on_right = tokenizer.padding_side == "right"
squad_v2 = True
# from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv-
def prepare_validation_features(examples):
# Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results
# in one example possible giving several features when a context is long, each of those features having a
# context that overlaps a bit the context of the previous feature.
tokenized_examples = tokenizer(
examples["question" if pad_on_right else "context"],
examples["context" if pad_on_right else "question"],
truncation="only_second" if pad_on_right else "only_first",
max_length=max_length,
stride=doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# We keep the example_id that gave us this feature and we will store the offset mappings.
tokenized_examples["example_id"] = []
for i in range(len(tokenized_examples["input_ids"])):
# Grab the sequence corresponding to that example (to know what is the context and what is the question).
sequence_ids = tokenized_examples.sequence_ids(i)
context_index = 1 if pad_on_right else 0
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
tokenized_examples["example_id"].append(examples["id"][sample_index])
# Set to None the offset_mapping that are not part of the context so it's easy to determine if a token
# position is part of the context or not.
tokenized_examples["offset_mapping"][i] = [
(list(o) if sequence_ids[k] == context_index else None)
for k, o in enumerate(tokenized_examples["offset_mapping"][i])
]
return tokenized_examples
# from https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb#scrollTo=941LPhDWeYv-
def postprocess_qa_predictions(examples, features, starting_logits, ending_logits, n_best_size = 20, max_answer_length = 30):
all_start_logits, all_end_logits = starting_logits, ending_logits
# Build a map example to its corresponding features.
example_id_to_index = {k: i for i, k in enumerate(examples["id"])}
features_per_example = collections.defaultdict(list)
for i, feature in enumerate(features):
features_per_example[example_id_to_index[feature["example_id"]]].append(i)
# The dictionaries we have to fill.
predictions = collections.OrderedDict()
# Logging.
print(f"Post-processing {len(examples)} example predictions split into {len(features)} features.")
# Let's loop over all the examples!
for example_index, example in enumerate(tqdm(examples)):
# Those are the indices of the features associated to the current example.
feature_indices = features_per_example[example_index]
min_null_score = None # Only used if squad_v2 is True.
valid_answers = []
context = example["context"]
# Looping through all the features associated to the current example.
for feature_index in feature_indices:
# We grab the predictions of the model for this feature.
start_logits = all_start_logits[feature_index]
end_logits = all_end_logits[feature_index]
# This is what will allow us to map some the positions in our logits to span of texts in the original
# context.
offset_mapping = features[feature_index]["offset_mapping"]
# Update minimum null prediction.
cls_index = features[feature_index]["input_ids"].index(tokenizer.cls_token_id)
feature_null_score = start_logits[cls_index] + end_logits[cls_index]
if min_null_score is None or min_null_score < feature_null_score:
min_null_score = feature_null_score
# Go through all possibilities for the `n_best_size` greater start and end logits.
start_indexes = np.argsort(start_logits)[-1 : -n_best_size - 1 : -1].tolist()
end_indexes = np.argsort(end_logits)[-1 : -n_best_size - 1 : -1].tolist()
for start_index in start_indexes:
for end_index in end_indexes:
# Don't consider out-of-scope answers, either because the indices are out of bounds or correspond
# to part of the input_ids that are not in the context.
if (
start_index >= len(offset_mapping)
or end_index >= len(offset_mapping)
or offset_mapping[start_index] is None
or offset_mapping[end_index] is None
):
continue
# Don't consider answers with a length that is either < 0 or > max_answer_length.
if end_index < start_index or end_index - start_index + 1 > max_answer_length:
continue
start_char = offset_mapping[start_index][0]
end_char = offset_mapping[end_index][1]
valid_answers.append(
{
"score": start_logits[start_index] + end_logits[end_index],
"text": context[start_char: end_char]
}
)
if len(valid_answers) > 0:
best_answer = sorted(valid_answers, key=lambda x: x["score"], reverse=True)[0]
else:
# In the very rare edge case we have not a single non-null prediction, we create a fake prediction to avoid
# failure.
best_answer = {"text": "", "score": 0.0}
# Let's pick our final answer: the best one or the null answer (only for squad_v2)
if not squad_v2:
predictions[example["id"]] = best_answer["text"]
else:
answer = best_answer["text"] if best_answer["score"] > min_null_score else ""
predictions[example["id"]] = answer
return predictions
# build base examples, features from training data
examples = load_dataset("squad_v2").shuffle(seed=5)['train']
features = load_dataset("squad_v2").shuffle(seed=5)['train'].map(
prepare_validation_features,
batched=True,
remove_columns=['answers', 'context', 'id', 'question', 'title'])
# sim some shuffled training indices that we want to use to re-order the data to compare how we did
shuffle_idx = np.arange(0, 131754)
np.random.shuffle(shuffle_idx)
# create a new dataset with rows selected following the training shuffle
features = features.select(indices=shuffle_idx)
# get unique example ids to match with the "example" data
id_list = list(dict.fromkeys(features['example_id']))
# now search for their index positions; load elastic search
es = Elasticsearch([{'host': 'localhost'}]).ping()
# add an index to the id column for the examples
examples.add_elasticsearch_index(column='id')
# search the examples for their index position
example_idx = [examples.search(index_name='id', query=i, k=1).indices for i in id_list]
# drop the elastic search
examples.drop_index(index_name='id')
# put examples in the right order
examples = examples.select(indices=example_idx)
# generate some fake data
logits = {'starting_logits': torch.randn(131754, 384), 'ending_logits': torch.randn(131754, 384)}
def score_squad(logits, n_best_size, max_answer):
# proceed with QA calculation
final_predictions = postprocess_qa_predictions(examples=examples,
features=features,
starting_logits=logits['starting_logits'],
ending_logits=logits['ending_logits'],
n_best_size=20,
max_answer_length=30)
metric = load_metric("squad_v2")
formatted_predictions = [{"id": k, "prediction_text": v, "no_answer_probability": 0.0} for k, v in final_predictions.items()]
references = [{"id": ex["id"], "answers": ex["answers"]} for ex in examples]
metrics = metric.compute(predictions=formatted_predictions, references=references)
return metrics
metrics = score_squad(logits, n_best_size=20, max_answer=30)
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1755/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1755/timeline | null | completed | null | null | false | [
"You can use `Dataset.flatten_indices()` to make it fast after a select or shuffle.",
"Thanks for the input! I gave that a try by adding this after my selection / reordering operations, but before the big computation task of `score_squad`\r\n\r\n```\r\nexamples = examples.flatten_indices()\r\nfeatures = features.... |
https://api.github.com/repos/huggingface/datasets/issues/5647 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5647/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5647/comments | https://api.github.com/repos/huggingface/datasets/issues/5647/events | https://github.com/huggingface/datasets/issues/5647 | 1,628,225,544 | I_kwDODunzps5hDMAI | 5,647 | Make all print statements optional | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 2 | 2023-03-16T20:30:07Z | 2023-07-21T14:20:25Z | 2023-07-21T14:20:24Z | null | ### Feature request
Make all print statements optional to speed up development
### Motivation
I'm loading multiple tiny datasets, and all the print statements make loading slower
### Your contribution
I can help contribute | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5647/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5647/timeline | null | completed | null | null | false | [
"related to #5444 ",
"We now log these messages instead of printing them (addressed in #6019), so I'm closing this issue."
] |
https://api.github.com/repos/huggingface/datasets/issues/728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/728/comments | https://api.github.com/repos/huggingface/datasets/issues/728/events | https://github.com/huggingface/datasets/issues/728 | 719,555,780 | MDU6SXNzdWU3MTk1NTU3ODA= | 728 | Passing `cache_dir` to a metric does not work | [] | closed | false | null | 0 | 2020-10-12T17:55:14Z | 2020-10-29T09:34:42Z | 2020-10-29T09:34:42Z | null | When passing `cache_dir` to a custom metric, the folder is concatenated to itself at some point and this results in a FileNotFoundError:
## Reproducer
```python
import datasets
import torch
from datasets import Metric
class GatherMetric(Metric):
def _info(self):
return datasets.MetricInfo(
description="description",
citation="citation",
inputs_description="kwargs",
features=datasets.Features({
'predictions': datasets.Value('int64'),
'references': datasets.Value('int64'),
}),
codebase_urls=[],
reference_urls=[],
format='numpy'
)
def _compute(self, predictions, references):
return {"predictions": predictions, "labels": references}
metric = GatherMetric(cache_dir="test-metric")
inputs = torch.randint(0, 2, (1024,))
targets = torch.randint(0, 2, (1024,))
batch_size = 8
for i in range(0, 1024, batch_size):
metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
result = metric.compute()
```
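A hedged observation on the doubled path in the error below: the cache file name appears to be joined onto the data directory even though it already contains it, and `os.path.join` only discards the left component when the right one is absolute. The standalone snippet below just illustrates that joining behaviour (it is not the library's code), which suggests — unverified — that passing an absolute `cache_dir` might behave differently:
```python
import os

# Relative paths are concatenated, reproducing the doubled prefix seen in the traceback:
print(os.path.join("test-metric/gather_metric/default", "test-metric/gather_metric/default/x.arrow"))
# -> test-metric/gather_metric/default/test-metric/gather_metric/default/x.arrow

# An absolute right-hand component makes os.path.join drop the left one:
print(os.path.join("/tmp/test-metric", "/tmp/test-metric/x.arrow"))
# -> /tmp/test-metric/x.arrow
```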
## Stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
~/git/datasets/src/datasets/metric.py in _finalize(self)
349 reader = ArrowReader(path=self.data_dir, info=DatasetInfo(features=self.features))
--> 350 self.data = Dataset(**reader.read_files([{"filename": f} for f in file_paths]))
351 except FileNotFoundError:
~/git/datasets/src/datasets/arrow_reader.py in read_files(self, files, original_instructions)
227 # Prepend path to filename
--> 228 pa_table = self._read_files(files)
229 files = copy.deepcopy(files)
~/git/datasets/src/datasets/arrow_reader.py in _read_files(self, files)
166 for f_dict in files:
--> 167 pa_table: pa.Table = self._get_dataset_from_filename(f_dict)
168 pa_tables.append(pa_table)
~/git/datasets/src/datasets/arrow_reader.py in _get_dataset_from_filename(self, filename_skip_take)
291 )
--> 292 mmap = pa.memory_map(filename)
293 f = pa.ipc.open_stream(mmap)
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.memory_map()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/io.pxi in pyarrow.lib.MemoryMappedFile._open()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
~/.pyenv/versions/3.7.9/envs/base/lib/python3.7/site-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
FileNotFoundError: [Errno 2] Failed to open local file 'test-metric/gather_metric/default/test-metric/gather_metric/default/default_experiment-1-0.arrow'. Detail: [errno 2] No such file or directory
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-17-e42d43cc981f> in <module>
2 for i in range(0, 1024, batch_size):
3 metric.add_batch(predictions=inputs[i:i+batch_size], references=targets[i:i+batch_size])
----> 4 result = metric.compute()
~/git/datasets/src/datasets/metric.py in compute(self, *args, **kwargs)
380 if predictions is not None:
381 self.add_batch(predictions=predictions, references=references)
--> 382 self._finalize()
383
384 self.cache_file_name = None
~/git/datasets/src/datasets/metric.py in _finalize(self)
351 except FileNotFoundError:
352 raise ValueError(
--> 353 "Error in finalize: another metric instance is already using the local cache file. "
354 "Please specify an experiment_id to avoid colision between distributed metric instances."
355 )
ValueError: Error in finalize: another metric instance is already using the local cache file. Please specify an experiment_id to avoid colision between distributed metric instances.
```
The code works when we remove the `cache_dir=...` from the metric. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/728/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/728/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/974/comments | https://api.github.com/repos/huggingface/datasets/issues/974/events | https://github.com/huggingface/datasets/pull/974 | 754,811,185 | MDExOlB1bGxSZXF1ZXN0NTMwNjQzNzQ3 | 974 | Add MeTooMA Dataset | [] | closed | false | null | 0 | 2020-12-01T23:44:01Z | 2020-12-01T23:57:58Z | 2020-12-01T23:57:58Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/974/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/974/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/974.diff",
"html_url": "https://github.com/huggingface/datasets/pull/974",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/974.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/974"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/2579 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2579/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2579/comments | https://api.github.com/repos/huggingface/datasets/issues/2579/events | https://github.com/huggingface/datasets/pull/2579 | 935,486,894 | MDExOlB1bGxSZXF1ZXN0NjgyMzkyNjYx | 2,579 | Fix BibTeX entry | [] | closed | false | null | 0 | 2021-07-02T07:10:40Z | 2021-07-02T07:33:44Z | 2021-07-02T07:33:44Z | null | Add missing contributor to BibTeX entry.
cc: @abhishekkrthakur @thomwolf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2579/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2579/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2579.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2579",
"merged_at": "2021-07-02T07:33:44Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2579.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2579"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1581 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1581/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1581/comments | https://api.github.com/repos/huggingface/datasets/issues/1581/events | https://github.com/huggingface/datasets/issues/1581 | 768,320,594 | MDU6SXNzdWU3NjgzMjA1OTQ= | 1,581 | Installing datasets and transformers in a tensorflow docker image throws Permission Error on 'import transformers' | [] | closed | false | null | 5 | 2020-12-16T00:02:21Z | 2021-06-17T15:40:45Z | 2021-06-17T15:40:45Z | null | I am using a docker container, based on latest tensorflow-gpu image, to run transformers and datasets (4.0.1 and 1.1.3 respectively - Dockerfile attached below). Importing transformers throws a Permission Error to access `/.cache`:
```
$ docker run --gpus=all --rm -it -u $(id -u):$(id -g) -v $(pwd)/data:/root/data -v $(pwd):/root -v $(pwd)/models/:/root/models -v $(pwd)/saved_models/:/root/saved_models -e "HOST_HOSTNAME=$(hostname)" hf-error:latest /bin/bash
________ _______________
___ __/__________________________________ ____/__ /________ __
__ / _ _ \_ __ \_ ___/ __ \_ ___/_ /_ __ /_ __ \_ | /| / /
_ / / __/ / / /(__ )/ /_/ / / _ __/ _ / / /_/ /_ |/ |/ /
/_/ \___//_/ /_//____/ \____//_/ /_/ /_/ \____/____/|__/
You are running this container as user with ID 1000 and group 1000,
which should map to the ID and group for your user on the Docker host. Great!
tf-docker /root > python
Python 3.6.9 (default, Oct 8 2020, 12:12:24)
[GCC 8.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import transformers
2020-12-15 23:53:21.165827: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/transformers/__init__.py", line 22, in <module>
from .integrations import ( # isort:skip
File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 5, in <module>
from .trainer_utils import EvaluationStrategy
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_utils.py", line 25, in <module>
from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 88, in <module>
import datasets # noqa: F401
File "/usr/local/lib/python3.6/dist-packages/datasets/__init__.py", line 26, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py", line 40, in <module>
from .arrow_reader import ArrowReader
File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 31, in <module>
from .utils import cached_path, logging
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/__init__.py", line 20, in <module>
from .download_manager import DownloadManager, GenerateMode
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/download_manager.py", line 25, in <module>
from .file_utils import HF_DATASETS_CACHE, cached_path, get_from_cache, hash_url_to_filename
File "/usr/local/lib/python3.6/dist-packages/datasets/utils/file_utils.py", line 118, in <module>
os.makedirs(HF_MODULES_CACHE, exist_ok=True)
File "/usr/lib/python3.6/os.py", line 210, in makedirs
makedirs(head, mode, exist_ok)
File "/usr/lib/python3.6/os.py", line 210, in makedirs
makedirs(head, mode, exist_ok)
File "/usr/lib/python3.6/os.py", line 220, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/.cache'
```
I've pinned the problem to `RUN pip install datasets`; by commenting it out you can actually import transformers correctly. Another workaround I've found is creating the directory and granting permissions to it directly in the Dockerfile.
```
FROM tensorflow/tensorflow:latest-gpu-jupyter
WORKDIR /root
EXPOSE 80
EXPOSE 8888
EXPOSE 6006
ENV SHELL /bin/bash
ENV PATH="/root/.local/bin:${PATH}"
ENV CUDA_CACHE_PATH="/root/cache/cuda"
ENV CUDA_CACHE_MAXSIZE="4294967296"
ENV TFHUB_CACHE_DIR="/root/cache/tfhub"
RUN pip install --upgrade pip
RUN apt update -y && apt upgrade -y
RUN pip install transformers
#Installing datasets will throw the error, try commenting and rebuilding
RUN pip install datasets
#Another workaround is creating the directory and giving permissions explicitly
#RUN mkdir /.cache
#RUN chmod 777 /.cache
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1581/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1581/timeline | null | completed | null | null | false | [
"Thanks for reporting !\r\nYou can override the directory in which cache file are stored using for example\r\n```\r\nENV HF_HOME=\"/root/cache/hf_cache_home\"\r\n```\r\n\r\nThis way both `transformers` and `datasets` will use this directory instead of the default `.cache`",
"Great, thanks. I didn't see documentat... |
https://api.github.com/repos/huggingface/datasets/issues/316 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/316/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/316/comments | https://api.github.com/repos/huggingface/datasets/issues/316/events | https://github.com/huggingface/datasets/pull/316 | 646,366,450 | MDExOlB1bGxSZXF1ZXN0NDQwNjY5NzY5 | 316 | add AG News dataset | [] | closed | false | null | 1 | 2020-06-26T16:11:58Z | 2020-06-30T09:58:08Z | 2020-06-30T08:31:55Z | null | adds support for the AG-News topic classification dataset | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/316/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/316/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/316.diff",
"html_url": "https://github.com/huggingface/datasets/pull/316",
"merged_at": "2020-06-30T08:31:55Z",
"patch_url": "https://github.com/huggingface/datasets/pull/316.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/316"
} | true | [
"Thanks @jxmorris12 for adding this adding. \r\nCan you please add a small description of the PR?"
] |
https://api.github.com/repos/huggingface/datasets/issues/6045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6045/comments | https://api.github.com/repos/huggingface/datasets/issues/6045/events | https://github.com/huggingface/datasets/pull/6045 | 1,808,072,270 | PR_kwDODunzps5Vr-r1 | 6,045 | Check if column names match in Parquet loader only when config `features` are specified | [] | closed | false | null | 8 | 2023-07-17T15:50:15Z | 2023-07-24T14:45:56Z | 2023-07-24T14:35:03Z | null | Fix #6039 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6045/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6045/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6045.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6045",
"merged_at": "2023-07-24T14:35:03Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6045.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6045"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6080 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6080/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6080/comments | https://api.github.com/repos/huggingface/datasets/issues/6080/events | https://github.com/huggingface/datasets/pull/6080 | 1,822,667,554 | PR_kwDODunzps5WdL4K | 6,080 | Remove README link to deprecated Colab notebook | [] | closed | false | null | 3 | 2023-07-26T15:27:49Z | 2023-07-26T16:24:43Z | 2023-07-26T16:14:34Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6080/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6080/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/6080.diff",
"html_url": "https://github.com/huggingface/datasets/pull/6080",
"merged_at": "2023-07-26T16:14:34Z",
"patch_url": "https://github.com/huggingface/datasets/pull/6080.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6080"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/2107 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2107/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2107/comments | https://api.github.com/repos/huggingface/datasets/issues/2107/events | https://github.com/huggingface/datasets/pull/2107 | 839,495,825 | MDExOlB1bGxSZXF1ZXN0NTk5NTAxODE5 | 2,107 | Metadata validation | [] | closed | false | null | 5 | 2021-03-24T08:52:41Z | 2021-04-26T08:27:14Z | 2021-04-26T08:27:13Z | null | - `pydantic` metadata schema with dedicated validators against our taxonomy
- CI script to validate new changes against this schema and start a virtuous loop
- soft validation on task ids since we expect the taxonomy to undergo some changes in the near future
For reference, with the current validation we have ~365~ 378 datasets with invalid metadata! Full error report [_here_](https://gist.github.com/theo-m/61b3c0c47fc6121d08d3174bd4c2a26b). | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2107/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2107/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2107.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2107",
"merged_at": "2021-04-26T08:27:13Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2107.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2107"
} | true | [
"> Also I was wondering this is really needed to have `utils.metadata` as a submodule of `datasets` ? This is only used by the CI so I'm not sure we should have this in the actual `datasets` package.\r\n\r\nI'm unclear on the suggestion, would you rather have a root-level `./metadata.py` file? I think it's well whe... |
https://api.github.com/repos/huggingface/datasets/issues/3728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3728/comments | https://api.github.com/repos/huggingface/datasets/issues/3728/events | https://github.com/huggingface/datasets/issues/3728 | 1,139,303,614 | I_kwDODunzps5D6GS- | 3,728 | VoxPopuli | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 1 | 2022-02-15T23:04:55Z | 2022-02-16T18:49:12Z | 2022-02-16T18:49:12Z | null | ## Adding a Dataset
- **Name:** VoxPopuli
- **Description:** A Large-Scale Multilingual Speech Corpus
- **Paper:** https://arxiv.org/pdf/2101.00390.pdf
- **Data:** https://github.com/facebookresearch/voxpopuli
- **Motivation:** one of the largest (if not the largest) multilingual speech corpus: 400K hours of multilingual unlabeled speech + 17k hours of labeled speech
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
👀 @kahne @Molugan
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3728/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3728/timeline | null | completed | null | null | false | [
"duplicate of https://github.com/huggingface/datasets/issues/2300"
] |
https://api.github.com/repos/huggingface/datasets/issues/2218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2218/comments | https://api.github.com/repos/huggingface/datasets/issues/2218/events | https://github.com/huggingface/datasets/issues/2218 | 857,238,435 | MDU6SXNzdWU4NTcyMzg0MzU= | 2,218 | Duplicates in the LAMA dataset | [] | open | false | null | 3 | 2021-04-13T18:59:49Z | 2021-04-14T21:42:27Z | null | null | I observed duplicates in the LAMA probing dataset, see a minimal code below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c1fe1ec0d6b5eece7bddc)
>>> train_dataset = dataset['train']
>>> train_dataset[0]
{'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'}
>>> train_dataset[1]
{'description': 'language or languages a person has learned from early childhood', 'label': 'native language', 'masked_sentence': 'Louis Jules Trochu ([lwi ʒyl tʁɔʃy]; 12 March 1815 – 7 October 1896) was a [MASK] military leader and politician.', 'obj_label': 'French', 'obj_surface': 'French', 'obj_uri': 'Q150', 'predicate_id': 'P103', 'sub_label': 'Louis Jules Trochu', 'sub_surface': 'Louis Jules Trochu', 'sub_uri': 'Q441235', 'template': 'The native language of [X] is [Y] .', 'template_negated': '[X] is not owned by [Y] .', 'type': 'N-1', 'uuid': '40b2ed1c-0961-482e-844e-32596b6117c8'}
```
I checked the original data available at https://dl.fbaipublicfiles.com/LAMA/data.zip. This particular duplicate comes from:
```
{"uuid": "40b2ed1c-0961-482e-844e-32596b6117c8", "obj_uri": "Q150", "obj_label": "French", "sub_uri": "Q441235", "sub_label": "Louis Jules Trochu", "predicate_id": "P103", "evidences": [{"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}, {"sub_surface": "Louis Jules Trochu", "obj_surface": "French", "masked_sentence": "Louis Jules Trochu ([lwi \u0292yl t\u0281\u0254\u0283y]; 12 March 1815 \u2013 7 October 1896) was a [MASK] military leader and politician."}]}
```
What is the best way to deal with these duplicates if I want to use `datasets` to probe with LAMA? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2218/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2218/timeline | null | null | null | null | false | [
"Hi,\r\n\r\ncurrently the datasets API doesn't have a dedicated function to remove duplicate rows, but since the LAMA dataset is not too big (it fits in RAM), we can leverage pandas to help us remove duplicates:\r\n```python\r\n>>> from datasets import load_dataset, Dataset\r\n>>> dataset = load_dataset('lama', spl... |
https://api.github.com/repos/huggingface/datasets/issues/3460 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3460/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3460/comments | https://api.github.com/repos/huggingface/datasets/issues/3460/events | https://github.com/huggingface/datasets/pull/3460 | 1,085,002,469 | PR_kwDODunzps4wFyCf | 3,460 | Don't encode lists as strings when using `Value("string")` | [] | open | false | null | 0 | 2021-12-20T16:50:49Z | 2022-07-06T15:19:49Z | null | null | Following https://github.com/huggingface/datasets/pull/3456#event-5792250497 it looks like `datasets` can silently convert lists to strings using `str()`, instead of raising an error.
This PR fixes this and should fix the issue with WER showing low values if the input format is not right. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3460/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3460/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3460.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3460",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3460.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3460"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2770 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2770/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2770/comments | https://api.github.com/repos/huggingface/datasets/issues/2770/events | https://github.com/huggingface/datasets/pull/2770 | 963,246,512 | MDExOlB1bGxSZXF1ZXN0NzA1OTAzMzIy | 2,770 | Add support for fast tokenizer in BertScore | [] | closed | false | null | 0 | 2021-08-07T15:00:03Z | 2021-08-09T12:34:43Z | 2021-08-09T11:16:25Z | null | This PR adds support for a fast tokenizer in BertScore, which has been added recently to the lib.
Fixes #2765 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2770/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2770/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2770.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2770",
"merged_at": "2021-08-09T11:16:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2770.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2770"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5863/comments | https://api.github.com/repos/huggingface/datasets/issues/5863/events | https://github.com/huggingface/datasets/pull/5863 | 1,710,335,905 | PR_kwDODunzps5QhtlM | 5,863 | Use a new low-memory approach for tf dataset index shuffling | [] | closed | false | null | 36 | 2023-05-15T15:28:34Z | 2023-06-08T16:40:18Z | 2023-06-08T16:32:51Z | null | This PR tries out a new approach to generating the index tensor in `to_tf_dataset`, which should reduce memory usage for very large datasets. I'll need to do some testing before merging it!
Fixes #5855 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5863/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5863/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5863.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5863",
"merged_at": "2023-06-08T16:32:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5863.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5863"
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5863). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
https://api.github.com/repos/huggingface/datasets/issues/4191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4191/comments | https://api.github.com/repos/huggingface/datasets/issues/4191/events | https://github.com/huggingface/datasets/issues/4191 | 1,210,028,090 | I_kwDODunzps5IH5A6 | 4,191 | feat: create an `Array3D` column from a list of arrays of dimension 2 | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 2 | 2022-04-20T18:04:32Z | 2022-05-12T15:08:40Z | 2022-05-12T15:08:40Z | null | **Is your feature request related to a problem? Please describe.**
It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create an `Array3D` column from a list of lists of arrays of dimension 1.
To illustrate my proposal, let's take the following toy dataset:
```python
import numpy as np
from datasets import Dataset, features
data_map = {
1: np.array([[0.2, 0,4],[0.19, 0,3]]),
2: np.array([[0.1, 0,4],[0.19, 0,3]]),
}
def create_toy_ds():
my_dict = {"id":[1, 2]}
return Dataset.from_dict(my_dict)
ds = create_toy_ds()
```
The following 2D processing works without any errors raised:
```python
def prepare_dataset_2D(batch):
batch["pixel_values"] = [data_map[index] for index in batch["id"]]
return batch
ds_2D = ds.map(
prepare_dataset_2D,
batched=True,
remove_columns=ds.column_names,
features=features.Features({"pixel_values": features.Array2D(shape=(2, 3), dtype="float32")})
)
```
The following 3D processing doesn't work:
```python
def prepare_dataset_3D(batch):
batch["pixel_values"] = [[data_map[index]] for index in batch["id"]]
return batch
ds_3D = ds.map(
prepare_dataset_3D,
batched=True,
remove_columns=ds.column_names,
features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3, dtype="float32")})
)
```
The error raised is:
```
---------------------------------------------------------------------------
ArrowInvalid Traceback (most recent call last)
[<ipython-input-6-676547e4cd41>](https://localhost:8080/#) in <module>()
3 batched=True,
4 remove_columns=ds.column_names,
----> 5 features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")})
6 )
12 frames
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
1971 new_fingerprint=new_fingerprint,
1972 disable_tqdm=disable_tqdm,
-> 1973 desc=desc,
1974 )
1975 else:
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
518 self: "Dataset" = kwargs.pop("self")
519 # apply actual function
--> 520 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
521 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
522 for dataset in datasets:
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
485 }
486 # apply actual function
--> 487 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
488 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
489 # re-apply format to the output
[/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
456 # Call actual function
457
--> 458 out = func(self, *args, **kwargs)
459
460 # Update fingerprint of in-place transforms + update in-place history of transforms
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py](https://localhost:8080/#) in _map_single(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc, cache_only)
2354 writer.write_table(batch)
2355 else:
-> 2356 writer.write_batch(batch)
2357 if update_data and writer is not None:
2358 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in write_batch(self, batch_examples, writer_batch_size)
505 col_try_type = try_features[col] if try_features is not None and col in try_features else None
506 typed_sequence = OptimizedTypedSequence(batch_examples[col], type=col_type, try_type=col_try_type, col=col)
--> 507 arrays.append(pa.array(typed_sequence))
508 inferred_features[col] = typed_sequence.get_inferred_type()
509 schema = inferred_features.arrow_schema if self.pa_writer is None else self.schema
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._handle_arrow_array_protocol()
[/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py](https://localhost:8080/#) in __arrow_array__(self, type)
175 storage = list_of_np_array_to_pyarrow_listarray(data, type=pa_type.value_type)
176 else:
--> 177 storage = pa.array(data, pa_type.storage_dtype)
178 return pa.ExtensionArray.from_storage(pa_type, storage)
179
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib.array()
/usr/local/lib/python3.7/dist-packages/pyarrow/array.pxi in pyarrow.lib._sequence_to_array()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.pyarrow_internal_check_status()
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Can only convert 1-dimensional array values
```
**Describe the solution you'd like**
No error in the second scenario and an identical result to the following snippets.
**Describe alternatives you've considered**
There are other alternatives that work such as:
```python
def prepare_dataset_3D_bis(batch):
batch["pixel_values"] = [[data_map[index].tolist()] for index in batch["id"]]
return batch
ds_3D_bis = ds.map(
prepare_dataset_3D_bis,
batched=True,
remove_columns=ds.column_names,
features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")})
)
```
or
```python
def prepare_dataset_3D_ter(batch):
batch["pixel_values"] = [data_map[index][np.newaxis, :, :] for index in batch["id"]]
return batch
ds_3D_ter = ds.map(
prepare_dataset_3D_ter,
batched=True,
remove_columns=ds.column_names,
features=features.Features({"pixel_values": features.Array3D(shape=(1, 2, 3), dtype="float32")})
)
```
But both solutions require the user to be aware that `data_map[index]` is an `np.array` type.
cc @lhoestq as we discussed offline :smile: | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4191/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4191/timeline | null | completed | null | null | false | [
"Hi @SaulLu, thanks for your proposal.\r\n\r\nJust I got a bit confused about the dimensions...\r\n- For the 2D case, you mention it is possible to create an `Array2D` from a list of arrays of dimension 1\r\n- However, you give an example of creating an `Array2D` from arrays of dimension 2:\r\n - the values of `da... |
https://api.github.com/repos/huggingface/datasets/issues/4258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4258/comments | https://api.github.com/repos/huggingface/datasets/issues/4258/events | https://github.com/huggingface/datasets/pull/4258 | 1,221,637,727 | PR_kwDODunzps43Gstg | 4,258 | Fix/start token mask issue and update documentation | [] | closed | false | null | 2 | 2022-04-29T22:42:44Z | 2022-05-02T16:33:20Z | 2022-05-02T16:26:12Z | null | This pr fixes a couple bugs:
1) the perplexity was calculated with a 0 in the attention mask for the start token, which was causing high perplexity scores that were not correct
2) the documentation was not updated | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4258/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4258/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4258.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4258",
"merged_at": "2022-05-02T16:26:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4258.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4258"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Good catch ! Thanks :)\r\n> \r\n> Next time can you describe your fix in the Pull Request description please ?\r\n\r\nThanks. Also whoops, sorry about not being very descriptive. I updated the pull request description, and will kee... |
https://api.github.com/repos/huggingface/datasets/issues/1965 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1965/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1965/comments | https://api.github.com/repos/huggingface/datasets/issues/1965/events | https://github.com/huggingface/datasets/issues/1965 | 818,833,460 | MDU6SXNzdWU4MTg4MzM0NjA= | 1,965 | Can we parallelized the add_faiss_index process over dataset shards ? | [] | closed | false | null | 3 | 2021-03-01T12:47:34Z | 2021-03-04T19:40:56Z | 2021-03-04T19:40:42Z | null | I am thinking of making the **add_faiss_index** process faster. What if we run the add_faiss_index process on separate dataset shards and then combine them before (dataset.concatenate) saving the faiss.index file ?
I feel that, theoretically, this will reduce retrieval accuracy since it affects the indexing process.
@lhoestq
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1965/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1965/timeline | null | completed | null | null | false | [
"Hi !\r\nAs far as I know not all faiss indexes can be computed in parallel and then merged. \r\nFor example [here](https://github.com/facebookresearch/faiss/wiki/Special-operations-on-indexes#splitting-and-merging-indexes) is is mentioned that only IndexIVF indexes can be merged.\r\nMoreover faiss already works us... |
https://api.github.com/repos/huggingface/datasets/issues/281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/281/comments | https://api.github.com/repos/huggingface/datasets/issues/281/events | https://github.com/huggingface/datasets/issues/281 | 641,067,856 | MDU6SXNzdWU2NDEwNjc4NTY= | 281 | Private/sensitive data | [] | closed | false | null | 3 | 2020-06-18T09:47:27Z | 2020-06-20T13:15:12Z | 2020-06-20T13:15:12Z | null | Hi all,
Thanks for this fantastic library; it makes it very easy to prototype NLP projects interchangeably between TF and PyTorch.
Unfortunately, there is data that cannot easily be shared publicly as it may contain sensitive information.
Is there support/a plan to support such data with NLP, e.g. by reading it from local sources?
Use case flow could look like this: use NLP to prototype an approach on similar, public data and apply the resulting prototype on sensitive/private data without the need to rethink data processing pipelines.
Many thanks for your responses ahead of time and kind regards,
MFreidank | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/281/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/281/timeline | null | completed | null | null | false | [
"Hi @MFreidank, you should already be able to load a dataset from local sources, indeed. (ping @lhoestq and @jplu)\r\n\r\nWe're also thinking about the ability to host private datasets on a hosted bucket with permission management, but that's further down the road.",
"Hi @MFreidank, it is possible to load a datas... |
https://api.github.com/repos/huggingface/datasets/issues/1907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1907/comments | https://api.github.com/repos/huggingface/datasets/issues/1907/events | https://github.com/huggingface/datasets/issues/1907 | 811,520,569 | MDU6SXNzdWU4MTE1MjA1Njk= | 1,907 | DBPedia14 Dataset Checksum bug? | [] | closed | false | null | 2 | 2021-02-18T22:25:48Z | 2021-02-22T23:22:05Z | 2021-02-22T23:22:04Z | null | Hi there!!!
I've been successfully using the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase for the last couple of weeks, but in the last couple of days I've started getting this error:
```
Traceback (most recent call last):
File "./conditional_classification/basic_pipeline.py", line 178, in <module>
main()
File "./conditional_classification/basic_pipeline.py", line 128, in main
corpus.load_data(limit_train_examples_per_class=args.data_args.train_examples_per_class,
File "/home/fp/dev/conditional_classification/conditional_classification/datasets_base.py", line 83, in load_data
datasets = load_dataset(self.name, split=dataset_split)
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/load.py", line 609, in load_dataset
builder_instance.download_and_prepare(
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 526, in download_and_prepare
self._download_and_prepare(
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 586, in _download_and_prepare
verify_checksums(
File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k']
```
I've seen this has happened before with other datasets, as reported in #537.
I've tried clearing my cache and calling `load_dataset` again, but it still doesn't work. The same codebase successfully downloads and uses other datasets (e.g. AGNews) without any problem, so I guess something has happened specifically to the DBPedia dataset in the last few days.
Can you please check if there's a problem with the checksums?
Or is this related to something else? I've seen that the path in the cache for the dataset is `/home/fp/.cache/huggingface/datasets/d_bpedia14/dbpedia_14/2.0.0/a70413e39e7a716afd0e90c9e53cb053691f56f9ef5fe317bd07f2c368e8e897...` and includes `d_bpedia14` instead of `dbpedia_14`. Was this maybe a bug introduced recently?
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1907/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1907/timeline | null | completed | null | null | false | [
"Hi ! :)\r\n\r\nThis looks like the same issue as https://github.com/huggingface/datasets/issues/1856 \r\nBasically google drive has quota issues that makes it inconvenient for downloading files.\r\n\r\nIf the quota of a file is exceeded, you have to wait 24h for the quota to reset (which is painful).\r\n\r\nThe er... |
https://api.github.com/repos/huggingface/datasets/issues/1313 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1313/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1313/comments | https://api.github.com/repos/huggingface/datasets/issues/1313/events | https://github.com/huggingface/datasets/pull/1313 | 759,536,512 | MDExOlB1bGxSZXF1ZXN0NTM0NTI1NjE3 | 1,313 | Add HateSpeech Corpus for Polish | [] | closed | false | null | 3 | 2020-12-08T15:23:53Z | 2020-12-16T16:48:45Z | 2020-12-16T16:48:45Z | null | This PR adds a HateSpeech Corpus for Polish, containing offensive language examples.
- **Homepage:** http://zil.ipipan.waw.pl/HateSpeech
- **Paper:** http://www.qualitativesociologyreview.org/PL/Volume38/PSJ_13_2_Troszynski_Wawer.pdf | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1313/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1313/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1313.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1313",
"merged_at": "2020-12-16T16:48:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1313.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1313"
} | true | [
"@lhoestq Do you think using the ClassLabel is correct if we don't know the meaning of them?",
"Once we find out the meanings we can still add them to the dataset card",
"Feel free to ping me when the PR is ready for the final review"
] |
https://api.github.com/repos/huggingface/datasets/issues/604 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/604/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/604/comments | https://api.github.com/repos/huggingface/datasets/issues/604/events | https://github.com/huggingface/datasets/pull/604 | 697,774,581 | MDExOlB1bGxSZXF1ZXN0NDgzNjgxNTc0 | 604 | Update bucket prefix | [] | closed | false | null | 0 | 2020-09-10T11:01:13Z | 2020-09-10T12:45:33Z | 2020-09-10T12:45:32Z | null | cc @julien-c | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/604/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/604/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/604.diff",
"html_url": "https://github.com/huggingface/datasets/pull/604",
"merged_at": "2020-09-10T12:45:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/604.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/604"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1077 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1077/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1077/comments | https://api.github.com/repos/huggingface/datasets/issues/1077/events | https://github.com/huggingface/datasets/pull/1077 | 756,617,964 | MDExOlB1bGxSZXF1ZXN0NTMyMTM5ODMx | 1,077 | Added glucose dataset | [] | closed | false | null | 0 | 2020-12-03T21:49:01Z | 2020-12-04T09:55:53Z | 2020-12-04T09:55:52Z | null | This PR adds the [Glucose](https://github.com/ElementalCognition/glucose) dataset. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1077/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1077/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1077.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1077",
"merged_at": "2020-12-04T09:55:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1077.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1077"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5036 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5036/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5036/comments | https://api.github.com/repos/huggingface/datasets/issues/5036/events | https://github.com/huggingface/datasets/pull/5036 | 1,389,094,075 | PR_kwDODunzps4_w8Bs | 5,036 | Add oversampling strategy iterable datasets interleave | [] | closed | false | null | 1 | 2022-09-28T10:10:23Z | 2022-09-30T12:30:48Z | 2022-09-30T12:28:23Z | null | Hello everyone,
Following issue #4893 and PR #4831, I propose here an oversampling strategy for an `IterableDataset` list.
The `all_exhausted` strategy stops building the new dataset as soon as all samples in each dataset have been added at least once.
It follows roughly the same logic behind #4831, namely:
- if ``probabilities`` is `None` and the strategy is `all_exhausted`, it simply performs a round robin interleaving that stops when the longest dataset is out of samples. Here the new dataset length will be $maxLengthDataset*nbDataset$.
- if ``probabilities`` is not `None` and the strategy is `all_exhausted`, it keeps track of the datasets which ran out of samples but continues to add them to the new dataset, and stops as soon as every dataset has run out of samples at least once.
In order to be consistent and also to align with the `Dataset` behavior, please note that the behavior of the default strategy (`first_exhausted`) has been changed. Namely, it really stops when a dataset is out of samples whereas it used to stop when receiving the `StopIteration` error.
To give an example of the last note, with the following snippet:
```
>>> from tests.test_iterable_dataset import *
>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [0, 1, 2]])), {}))
>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [10, 11, 12, 13]])), {}))
>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {"a": i}) for i in [20, 21, 22, 23, 24]])), {}))
>>> dataset = interleave_datasets([d1, d2, d3])
>>> [x["a"] for x in dataset]
```
The result here will then be `[10, 0, 11, 1, 2]` instead of `[10, 0, 11, 1, 2, 20, 12, 13]`.
I modified the behavior because I found it to be consistent with the under/oversampling approach and because it unified the undersampling and oversampling code, but I stay open to any suggestions.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5036/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5036/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5036.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5036",
"merged_at": "2022-09-30T12:28:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5036.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5036"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/1025 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1025/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1025/comments | https://api.github.com/repos/huggingface/datasets/issues/1025/events | https://github.com/huggingface/datasets/pull/1025 | 755,673,371 | MDExOlB1bGxSZXF1ZXN0NTMxMzQxNjE5 | 1,025 | Add Sesotho Ner | [] | closed | false | null | 4 | 2020-12-02T23:00:15Z | 2020-12-16T16:27:03Z | 2020-12-16T16:27:02Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1025/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1025/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1025.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1025",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1025.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1025"
} | true | [
"looks like this PR include changes to other files (sepedi)\r\ncould you try to only include the files related to the addition of sesotho ner ?",
"I think i need to clean up my local repo. I am committing everything a fresh after sepedi",
"Feel free to ping me when yuo have a clean PR and it's ready to review :... | |
https://api.github.com/repos/huggingface/datasets/issues/2188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2188/comments | https://api.github.com/repos/huggingface/datasets/issues/2188/events | https://github.com/huggingface/datasets/issues/2188 | 853,044,166 | MDU6SXNzdWU4NTMwNDQxNjY= | 2,188 | Duplicate data in Timit dataset | [] | closed | false | null | 2 | 2021-04-08T04:21:54Z | 2021-04-08T12:13:19Z | 2021-04-08T12:13:19Z | null | I ran a simple code to list all texts in Timit dataset and the texts were all the same.
Is this dataset corrupted?
**Code:**
from datasets import load_dataset
timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
**Result:**
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
...
...
Would such an act of refusal be useful? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2188/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2188/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting\r\nIf I recall correctly this has been recently fixed #1995\r\nCan you try to upgrade your local version of `datasets` ?\r\n```\r\npip install --upgrade datasets\r\n```",
"Hi Ihoestq,\r\n\r\nThank you. It works after upgrading the datasets\r\n"
] |
https://api.github.com/repos/huggingface/datasets/issues/4733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4733/comments | https://api.github.com/repos/huggingface/datasets/issues/4733/events | https://github.com/huggingface/datasets/issues/4733 | 1,314,479,616 | I_kwDODunzps5OWV4A | 4,733 | rouge metric | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-07-22T07:06:51Z | 2022-07-22T09:08:02Z | 2022-07-22T09:05:35Z | null | ## Describe the bug
A clear and concise description of what the bug is.
Loading the Rouge metric gives an error after the latest rouge-score==0.0.7 release.
Downgrading to rouge-score==0.0.4 works fine.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
```
## Expected results
A clear and concise description of the expected results.
from rouge_score import rouge_scorer, scoring
should run
## Actual results
Specify the actual results or traceback.
File "/root/.cache/huggingface/modules/datasets_modules/metrics/rouge/0ffdb60f436bdb8884d5e4d608d53dbe108e82dac4f494a66f80ef3f647c104f/rouge.py", line 21, in <module>
from rouge_score import rouge_scorer, scoring
ImportError: cannot import name 'rouge_scorer' from 'rouge_score' (unknown location)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux
- Python version:3.9
- PyArrow version:
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4733/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4733/timeline | null | completed | null | null | false | [
"Fixed by:\r\n- #4735"
] |
https://api.github.com/repos/huggingface/datasets/issues/744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/744/comments | https://api.github.com/repos/huggingface/datasets/issues/744/events | https://github.com/huggingface/datasets/issues/744 | 724,918,448 | MDU6SXNzdWU3MjQ5MTg0NDg= | 744 | Dataset Explorer Doesn't Work for squad_es and squad_it | [
{
"color": "94203D",
"default": false,
"description": "",
"id": 2107841032,
"name": "nlp-viewer",
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer"
}
] | closed | false | null | 1 | 2020-10-19T19:34:12Z | 2020-10-26T16:36:17Z | 2020-10-26T16:36:17Z | null | https://huggingface.co/nlp/viewer/?dataset=squad_es
https://huggingface.co/nlp/viewer/?dataset=squad_it
Both pages show "OSError: [Errno 28] No space left on device". | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/744/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/744/timeline | null | completed | null | null | false | [
"Oups wrong click.\r\nThis one is for you @srush"
] |
https://api.github.com/repos/huggingface/datasets/issues/2910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2910/comments | https://api.github.com/repos/huggingface/datasets/issues/2910/events | https://github.com/huggingface/datasets/pull/2910 | 996,149,632 | PR_kwDODunzps4rvL9N | 2,910 | feat: 🎸 pass additional arguments to get private configs + info | [] | closed | false | null | 1 | 2021-09-14T15:24:19Z | 2021-09-15T16:19:09Z | 2021-09-15T16:19:06Z | null | `use_auth_token` can now be passed to the functions to get the configs
or infos of private datasets on the hub | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2910/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2910/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2910.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2910",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2910.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2910"
} | true | [
"Included in https://github.com/huggingface/datasets/pull/2906"
] |
https://api.github.com/repos/huggingface/datasets/issues/5527 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5527/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5527/comments | https://api.github.com/repos/huggingface/datasets/issues/5527/events | https://github.com/huggingface/datasets/pull/5527 | 1,581,228,531 | PR_kwDODunzps5JysSM | 5,527 | Fix benchmarks CI - pin protobuf | [] | closed | false | null | 5 | 2023-02-12T11:51:25Z | 2023-02-13T10:29:03Z | 2023-02-13T09:24:16Z | null | fix https://github.com/huggingface/datasets/actions/runs/4156059127/jobs/7189576331 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5527/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5527/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5527.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5527",
"merged_at": "2023-02-13T09:24:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5527.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5527"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/4574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4574/comments | https://api.github.com/repos/huggingface/datasets/issues/4574/events | https://github.com/huggingface/datasets/pull/4574 | 1,285,380,616 | PR_kwDODunzps46ZOpZ | 4,574 | Support streaming mlsum dataset | [] | closed | false | null | 7 | 2022-06-27T07:37:03Z | 2022-07-21T13:37:30Z | 2022-07-21T12:40:00Z | null | Support streaming mlsum dataset.
This PR:
- pins `fsspec` min version with fixed BlockSizeError: `fsspec[http]>=2021.11.1`
- https://github.com/fsspec/filesystem_spec/pull/830
- unpins `s3fs==2021.08.1` to align it with `fsspec` requirement: `s3fs>=2021.11.1`
> s3fs 2021.8.1 requires fsspec==2021.08.1
- see discussion: https://github.com/huggingface/datasets/pull/2858/files#r700027326
- updates the following requirements to be compatible with the previous ones and one with each other:
- `aiobotocore==1.4.2` to `aiobotocore>=2.0.1` (required by s3fs>=2021.11.1)
- `boto3==1.17.106` to `boto3>=1.19.8` (to be compatible with aiobotocore>=2.0.1)
- `botocore==1.20.106` to `botocore>=1.22.8` (to be compatible with aiobotocore and boto3)
Fix #4572. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4574/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4574/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4574.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4574",
"merged_at": "2022-07-21T12:40:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4574.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4574"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"After unpinning `s3fs` and pinning `fsspec[http]>=2021.11.1`, the CI installs\r\n- `fsspec-2022.1.0`\r\n- `s3fs-0.5.1`\r\n\r\nand raises the following error:\r\n```\r\n ImportError while loading conftest '/home/runner/work/datasets/d... |
https://api.github.com/repos/huggingface/datasets/issues/1859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1859/comments | https://api.github.com/repos/huggingface/datasets/issues/1859/events | https://github.com/huggingface/datasets/issues/1859 | 805,479,025 | MDU6SXNzdWU4MDU0NzkwMjU= | 1,859 | Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU) | [] | closed | false | null | 3 | 2021-02-10T12:41:00Z | 2021-02-10T18:32:12Z | 2021-02-10T18:17:47Z | null | Error serializing faiss index. Error as follows:
`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index`
Note:
`torch.cuda.is_available()` reports:
```
Cuda is available
cuda:0
```
Adding index, device=0 for GPU.
`dataset.add_faiss_index(column='embeddings', index_name='idx_embeddings', device=0)`
However, during a quick debug, `self.faiss_index` has no attr "device" when checked in `search.py`, method `save`, so it fails to transform the GPU index to a CPU index. If I add the index without a device, the index is saved OK.
```
def save(self, file: str):
"""Serialize the FaissIndex on disk"""
import faiss # noqa: F811
if (
hasattr(self.faiss_index, "device")
and self.faiss_index.device is not None
and self.faiss_index.device > -1
):
index = faiss.index_gpu_to_cpu(self.faiss_index)
else:
index = self.faiss_index
faiss.write_index(index, file)
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1859/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1859/timeline | null | completed | null | null | false | [
"Hi @corticalstack ! Thanks for reporting. Indeed in the recent versions of Faiss we must use `getDevice` to check if the index in on GPU.\r\n\r\nI'm opening a PR",
"I fixed this issue. It should work fine now.\r\nFeel free to try it out by installing `datasets` from source.\r\nOtherwise you can wait for the next... |
https://api.github.com/repos/huggingface/datasets/issues/801 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/801/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/801/comments | https://api.github.com/repos/huggingface/datasets/issues/801/events | https://github.com/huggingface/datasets/issues/801 | 735,790,876 | MDU6SXNzdWU3MzU3OTA4NzY= | 801 | How to join two datasets? | [] | closed | false | null | 3 | 2020-11-04T03:53:11Z | 2020-12-23T14:02:58Z | 2020-12-23T14:02:58Z | null | Hi,
I'm wondering if it's possible to join two (preprocessed) datasets with the same number of rows but different labels?
I'm currently trying to create paired sentences for BERT from `wikipedia/'20200501.en`, and I couldn't figure out a way to create a paired sentence using `.map()` where the second sentence is **not** the next sentence (i.e., from a different article) of the first sentence.
Thanks! | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/801/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/801/timeline | null | completed | null | null | false | [
"Hi this is also my question. thanks ",
"Hi ! Currently the only way to add new fields to a dataset is by using `.map` and picking items from the other dataset\r\n",
"Closing this one. Feel free to re-open if you have other questions about this issue.\r\n\r\nAlso linking another discussion about joining dataset... |
https://api.github.com/repos/huggingface/datasets/issues/3498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3498/comments | https://api.github.com/repos/huggingface/datasets/issues/3498/events | https://github.com/huggingface/datasets/pull/3498 | 1,090,096,332 | PR_kwDODunzps4wWL5U | 3,498 | update `pretty_name` for first 200 datasets | [] | closed | false | null | 0 | 2021-12-28T19:50:07Z | 2022-07-10T14:36:53Z | 2022-01-05T16:38:21Z | null | I made a script some time back to fetch `pretty_names` from `papers_with_code` dataset along with some other rules incase that dataset wasn't available on `papers_with_code`. Updating them in the `README` of `datasets`. Took only the first 200 datasets into consideration and after some eyeballing, most of them were looking good to me! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3498/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3498/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3498.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3498",
"merged_at": "2022-01-05T16:38:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3498.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3498"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1552 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1552/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1552/comments | https://api.github.com/repos/huggingface/datasets/issues/1552/events | https://github.com/huggingface/datasets/pull/1552 | 765,664,411 | MDExOlB1bGxSZXF1ZXN0NTM5MDI2MzAx | 1,552 | Added OPUS ParaCrawl | [] | closed | false | null | 6 | 2020-12-13T21:44:29Z | 2020-12-21T09:50:26Z | 2020-12-21T09:50:25Z | null | Dataset : http://opus.nlpl.eu/ParaCrawl.php | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1552/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1552/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1552.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1552",
"merged_at": "2020-12-21T09:50:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1552.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1552"
} | true | [
"@lhoestq I saw some common changes you made on the other PR's (Similar Opus Datasets). I fixed those changes here. Can you please review it once ? \r\nThanks.",
"@rkc007 @lhoestq just noticed a dataset named para_crawl has been added a long time ago: #91.",
"They're not exactly the same so it's ok to have both... |
https://api.github.com/repos/huggingface/datasets/issues/1480 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1480/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1480/comments | https://api.github.com/repos/huggingface/datasets/issues/1480/events | https://github.com/huggingface/datasets/pull/1480 | 762,530,805 | MDExOlB1bGxSZXF1ZXN0NTM3MDY1NDMx | 1,480 | Adding the Mac-Morpho dataset | [] | closed | false | null | 0 | 2020-12-11T16:01:38Z | 2020-12-21T10:03:37Z | 2020-12-21T10:03:37Z | null | Adding the Mac-Morpho dataset, a Portuguese language dataset for Part-of-speech tagging tasks | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1480/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1480/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1480.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1480",
"merged_at": "2020-12-21T10:03:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1480.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1480"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1121/comments | https://api.github.com/repos/huggingface/datasets/issues/1121/events | https://github.com/huggingface/datasets/pull/1121 | 757,169,944 | MDExOlB1bGxSZXF1ZXN0NTMyNTkwNjY2 | 1,121 | adding cdt dataset | [] | closed | false | null | 0 | 2020-12-04T15:04:33Z | 2020-12-04T15:16:49Z | 2020-12-04T15:16:49Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1121/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1121/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1121.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1121",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1121.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1121"
} | true | [] | |
https://api.github.com/repos/huggingface/datasets/issues/4718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4718/comments | https://api.github.com/repos/huggingface/datasets/issues/4718/events | https://github.com/huggingface/datasets/pull/4718 | 1,309,520,453 | PR_kwDODunzps47prWR | 4,718 | Make Extractor accept Path as input | [] | closed | false | null | 1 | 2022-07-19T13:25:06Z | 2022-07-22T13:42:27Z | 2022-07-22T13:29:43Z | null | This PR:
- Makes `Extractor` accept instance of `Path` as input
- Removes unnecessary castings of `Path` to `str` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4718/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4718/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4718.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4718",
"merged_at": "2022-07-22T13:29:43Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4718.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4718"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2320 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2320/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2320/comments | https://api.github.com/repos/huggingface/datasets/issues/2320/events | https://github.com/huggingface/datasets/pull/2320 | 876,257,026 | MDExOlB1bGxSZXF1ZXN0NjMwNDM5NjI5 | 2,320 | Set default name in init_dynamic_modules | [] | closed | false | null | 0 | 2021-05-05T09:30:03Z | 2021-05-06T07:57:54Z | 2021-05-06T07:57:54Z | null | Set default value for the name of dynamic modules.
Close #2318. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2320/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2320/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2320.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2320",
"merged_at": "2021-05-06T07:57:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2320.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2320"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3238/comments | https://api.github.com/repos/huggingface/datasets/issues/3238/events | https://github.com/huggingface/datasets/issues/3238 | 1,048,226,086 | I_kwDODunzps4-eqkm | 3,238 | Reuters21578 Couldn't reach | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 2 | 2021-11-09T06:08:56Z | 2021-11-11T00:02:57Z | 2021-11-11T00:02:57Z | null | ## Adding a Dataset
- **Name:** *Reuters21578*
- **Description:** *ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz*
- **Data:** *https://huggingface.co/datasets/reuters21578*
`from datasets import load_dataset`
`dataset = load_dataset("reuters21578", 'ModLewis')`
ConnectionError: Couldn't reach https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz
And I tried to request the link as follows:
`import requests`
`requests.head('https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.tar.gz')`
SSLError: HTTPSConnectionPool(host='kdd.ics.uci.edu', port=443): Max retries exceeded with url: /databases/reuters21578/reuters21578.tar.gz (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),))
This problem likes #575
What should I do ?
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3238/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3238/timeline | null | completed | null | null | false | [
"Hi ! The URL works fine on my side today, could you try again ?",
"thank you @lhoestq \r\nit works"
] |
https://api.github.com/repos/huggingface/datasets/issues/2269 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2269/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2269/comments | https://api.github.com/repos/huggingface/datasets/issues/2269/events | https://github.com/huggingface/datasets/pull/2269 | 868,878,468 | MDExOlB1bGxSZXF1ZXN0NjI0MzMwNDA3 | 2,269 | Fix query table with iterable | [] | closed | false | null | 0 | 2021-04-27T13:59:38Z | 2021-04-27T14:21:57Z | 2021-04-27T14:21:56Z | null | The benchmark runs are failing on master because it tries to use an iterable to query the dataset.
However there's currently an issue caused by the use of `np.array` instead of `np.fromiter` on the iterable.
This PR fixes it | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2269/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2269/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2269.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2269",
"merged_at": "2021-04-27T14:21:56Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2269.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2269"
} | true | [] |
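The `np.array` vs `np.fromiter` distinction mentioned in the PR description above is easy to see in isolation: `np.fromiter` consumes an iterator element by element, while `np.array` applied to a bare generator wraps the generator object itself. A minimal, self-contained illustration:

```python
import numpy as np

indices = (i * 2 for i in range(5))  # an iterable of row indices, as when querying a table

# np.fromiter drains the iterator and builds a proper 1-D integer array.
arr = np.fromiter(indices, dtype=np.int64)
print(arr)  # [0 2 4 6 8]

# np.array((i * 2 for i in range(5))) would instead return a 0-d object array
# holding the generator, which cannot be used to index rows.
```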
https://api.github.com/repos/huggingface/datasets/issues/1754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1754/comments | https://api.github.com/repos/huggingface/datasets/issues/1754/events | https://github.com/huggingface/datasets/pull/1754 | 789,881,730 | MDExOlB1bGxSZXF1ZXN0NTU4MTU5NjEw | 1,754 | Use a config id in the cache directory names for custom configs | [] | closed | false | null | 0 | 2021-01-20T11:11:00Z | 2021-01-25T09:12:07Z | 2021-01-25T09:12:06Z | null | As noticed by @JetRunner there was some issues when trying to generate a dataset using a custom config that is based on an existing config.
For example in the following code the `mnli_custom` would reuse the cache used to create `mnli` instead of generating a new dataset with the new label classes:
```python
from datasets import load_dataset
mnli = load_dataset("glue", "mnli")
mnli_custom = load_dataset("glue", "mnli", label_classes=["contradiction", "entailment", "neutral"])
```
I fixed that by extending the cache directory definition of a dataset that is being generated.
Instead of using the config name in the cache directory name, I switched to using a `config_id`.
By default it is equal to the config name.
However, the name of a config is not sufficient to uniquely identify the dataset being generated, since it doesn't take into account:
- the config kwargs that can be used to overwrite attributes
- the custom features used to write the dataset
- the data_files for json/text/csv/pandas datasets
Therefore the config id is just the config name with an optional suffix based on these.
In particular taking into account the config kwargs fixes the issue with the `label_classes` above.
I completed the current test cases by adding the case that was missing: overwriting an already existing config. | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1754/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1754/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1754.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1754",
"merged_at": "2021-01-25T09:12:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1754.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1754"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/279 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/279/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/279/comments | https://api.github.com/repos/huggingface/datasets/issues/279/events | https://github.com/huggingface/datasets/issues/279 | 640,611,692 | MDU6SXNzdWU2NDA2MTE2OTI= | 279 | Dataset Preprocessing Cache with .map() function not working as expected | [] | closed | false | null | 5 | 2020-06-17T17:17:21Z | 2021-07-06T21:43:28Z | 2021-04-18T23:43:49Z | null | I've been having issues with reproducibility when loading and processing datasets with the `.map` function. I was only able to resolve them by clearing all of the cache files on my system.
Is there a way to disable using the cache when processing a dataset? As I make minor processing changes on the same dataset, I want to be able to be certain the data is being re-processed rather than loaded from a cached file.
Could you also help me understand a bit more about how the caching functionality is used for pre-processing? E.g. how is it determined when to load from a cache vs. reprocess.
I was particularly having an issue where the correct dataset splits were loaded, but as soon as I applied the `.map()` function to each split independently, they somehow all exited this process having been converted to the test set.
Thanks! | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/279/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/279/timeline | null | completed | null | null | false | [
"When you're processing a dataset with `.map`, it checks whether it has already done this computation using a hash based on the function and the input (using some fancy serialization with `dill`). If you found that it doesn't work as expected in some cases, let us know !\r\n\r\nGiven that, you can still force to re... |
https://api.github.com/repos/huggingface/datasets/issues/4234 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4234/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4234/comments | https://api.github.com/repos/huggingface/datasets/issues/4234/events | https://github.com/huggingface/datasets/pull/4234 | 1,216,818,846 | PR_kwDODunzps422Mwn | 4,234 | Autoeval config | [] | closed | false | null | 15 | 2022-04-27T05:32:10Z | 2022-05-06T13:20:31Z | 2022-05-05T18:20:58Z | null | Added autoeval config to imdb as pilot | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4234/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4234/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4234.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4234",
"merged_at": "2022-05-05T18:20:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4234.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4234"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Related to: https://github.com/huggingface/autonlp-backend/issues/414 and https://github.com/huggingface/autonlp-backend/issues/424",
"The tests are failing due to the changed metadata:\r\n\r\n```\r\ngot an unexpected keyword argum... |
https://api.github.com/repos/huggingface/datasets/issues/954 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/954/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/954/comments | https://api.github.com/repos/huggingface/datasets/issues/954/events | https://github.com/huggingface/datasets/pull/954 | 754,362,012 | MDExOlB1bGxSZXF1ZXN0NTMwMjc1MDY4 | 954 | add prachathai67k | [] | closed | false | null | 3 | 2020-12-01T12:40:55Z | 2020-12-02T05:12:11Z | 2020-12-02T04:43:52Z | null | `prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com
The prachathai-67k dataset was scraped from the news site Prachathai.
We filtered out those articles with fewer than 500 characters of body text, mostly images and cartoons.
It contains 67,889 articles with 12 curated tags from August 24, 2004 to November 15, 2018.
The dataset was originally scraped by @lukkiddd and cleaned by @cstorm125.
You can also see preliminary exploration at https://github.com/PyThaiNLP/prachathai-67k/blob/master/exploration.ipynb | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/954/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/954/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/954.diff",
"html_url": "https://github.com/huggingface/datasets/pull/954",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/954.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/954"
} | true | [
"Test failing for same issues as https://github.com/huggingface/datasets/pull/939\r\nPlease advise.\r\n\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILED tests/test_dataset_common.py... |
https://api.github.com/repos/huggingface/datasets/issues/6017 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6017/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6017/comments | https://api.github.com/repos/huggingface/datasets/issues/6017/events | https://github.com/huggingface/datasets/issues/6017 | 1,799,309,132 | I_kwDODunzps5rP0dM | 6,017 | Switch to huggingface_hub's HfFileSystem | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2023-07-11T16:24:40Z | 2023-07-17T17:01:01Z | 2023-07-17T17:01:01Z | null | instead of the current datasets.filesystems.hffilesystem.HfFileSystem which can be slow in some cases
related to https://github.com/huggingface/datasets/issues/5846 and https://github.com/huggingface/datasets/pull/5919 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6017/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6017/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/4702 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4702/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4702/comments | https://api.github.com/repos/huggingface/datasets/issues/4702/events | https://github.com/huggingface/datasets/issues/4702 | 1,307,793,811 | I_kwDODunzps5N81mT | 4,702 | Domain specific dataset discovery on the Hugging Face hub | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 9 | 2022-07-18T11:14:03Z | 2022-07-19T15:18:11Z | null | null | **Is your feature request related to a problem? Please describe.**
## The problem
The datasets hub currently has `8,239` datasets. These datasets span a wide range of different modalities and tasks (currently with a bias towards textual data).
There are various ways of identifying datasets that may be relevant for a particular use case:
- searching
- various filters
Currently, however, there isn't an easy way to identify datasets belonging to a specific domain. For example, I want to browse machine learning datasets related to 'social science' or 'climate change research'.
The ability to identify datasets relating to a specific domain has come up in discussions around the [BigLAM](https://github.com/bigscience-workshop/lam/) datasets hackathon https://github.com/bigscience-workshop/lam/discussions/31#discussioncomment-3123610. As part of the hackathon, we're currently collecting datasets related to Libraries, Archives and Museums and making them available via the hub. We currently do this under a Hugging Face organization (https://huggingface.co/biglam). However, going forward, I can see some of these datasets being migrated to sit under an organization that is the custodian of the dataset (for example, a national library the data was originally from). At this point, it becomes more difficult to quickly identify datasets from this domain without relying on search.
This is also related to some existing issues on Github related to metadata on the hub:
- https://github.com/huggingface/datasets/issues/3625
- https://github.com/huggingface/datasets/issues/3877
**Describe the solution you'd like**
### Some possible solutions that may help with this:
#### Enable domain tags (from a controlled vocabulary)
- This would add metadata field to the YAML for the domain a dataset relates to
- Advantages:
- the list is controlled, allowing it to be more easily integrated into the datasets tag app (https://huggingface.co/space/huggingface/datasets-tagging)
- the controlled vocabulary could align with an existing controlled vocabulary
- this additional metadata can be used to perform filtering by domain
- disadvantages
- choosing the best controlled vocab may be difficult
- there are many datasets that are likely to fit into the 'machine learning' domain (i.e. there is a long tail of datasets that aren't in the more 'generic' machine learning domain)
#### Enable topic tags (user-generated)
Enable 'free form' topic tags for datasets and models. This would be closer to GitHub's repository topics which can be chosen from a controlled list (https://github.com/topics/) but can also be more user/org specific. This could potentially be useful for organizations to also manage their own models and datasets as the number they hold in their org grows. For example, they may create 'topic tags' for a specific project, so it's clearer which datasets /models are related to that project.
#### Collections
This solution would likely be the biggest shift and may require significant changes in the hub frontend. Collections could work in several different ways but would include:
Users can curate particular datasets, models, spaces, etc., into a collection. For example, they may create a collection of 'historic newspapers suitable for training language models'. These collections would not be mutually exclusive, i.e. a dataset can belong to zero, one or many collections. Collections can also potentially be nested under other collections.
This is fairly common on other data repositories; for example, the following collections:
<img width="293" alt="Screenshot 2022-07-18 at 11 50 44" src="https://user-images.githubusercontent.com/8995957/179496445-963ed122-5e26-4574-96e8-41081bce3e2b.png">
all belong under a higher level collection (https://bl.iro.bl.uk/collections/353c908d-b495-4413-b047-87236d2573e3?locale=en).
There are different models one could use for how these collections could be created:
- only within an org
- for any dataset/model
- the owner or a dataset/model has to agree to be added to a collection
- a collection owner can have people suggest additions to their collection
- other models....
These collections could be thematic, related to particular training approaches, or curate models with particular inference properties, etc. Whilst some of these features may duplicate current or future tag filters on the hub, they offer the advantage of being flexible and not having to predict what users will want to do upfront.
There is also potential for automating the creation of these collections based on existing metadata. For example, one could collect models trained on a collection of datasets: if we had a collection of 'historic newspapers suitable for training language models' that contained 30 datasets, we could create another collection 'historic newspaper language models' that takes any model on the hub whose metadata says it used one or more of those 30 datasets.
There is also the option of exploring ML approaches to suggest models/datasets may be relevant to a particular collection.
This approach is likely to be quite difficult to implement well and would require significant thought. There is also likely to be a benefit in doing quite a bit of upfront work in curating useful collections to demonstrate the benefits of collections.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
It is possible to collate this information externally, i.e. one could link back to the relevant models/datasets from an external platform.
**Additional context**
Add any other context about the feature request here.
I'm cc'ing others involved in the BigLAM hackathon who may also have thoughts @cakiki @clancyoftheoverflow @albertvillanova | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4702/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4702/timeline | null | null | null | null | false | [
"Hi! I added a link to this issue in our internal request for adding keywords/topics to the Hub, which is identical to the `topic tags` solution. The `collections` solution seems too complex (as you point out). Regarding the `domain tags` solution, we primarily focus on machine learning, so I'm not sure if it's a g... |
https://api.github.com/repos/huggingface/datasets/issues/3490 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3490/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3490/comments | https://api.github.com/repos/huggingface/datasets/issues/3490/events | https://github.com/huggingface/datasets/issues/3490 | 1,089,730,181 | I_kwDODunzps5A8_aF | 3,490 | Does datasets support load text from HDFS? | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 1 | 2021-12-28T08:56:02Z | 2022-02-14T14:00:51Z | null | null | The raw text data is stored on HDFS because the dataset is too large to store on my development machine,
so I wonder: does datasets support reading data from HDFS? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3490/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3490/timeline | null | null | null | null | false | [
"Hi ! `datasets` currently supports reading local files or files over HTTP. We may add support for other filesystems (cloud storages, hdfs...) at one point though :)"
] |
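The reply above notes that only local files and HTTP were supported at the time. As a stopgap, the text can be read from HDFS through a generic filesystem layer and turned into an in-memory dataset; this only helps if the data fits in memory, and the path, host, and column name below are hypothetical. It assumes `fsspec` with an HDFS implementation (for example the one provided by `pyarrow`) is installed.

```python
import fsspec
from datasets import Dataset

# Hypothetical HDFS location; requires an fsspec HDFS backend.
path = "hdfs://namenode:8020/corpora/raw_text.txt"

with fsspec.open(path, "r") as f:
    lines = [line.rstrip("\n") for line in f]

dataset = Dataset.from_dict({"text": lines})
print(dataset)
```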
https://api.github.com/repos/huggingface/datasets/issues/5332 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5332/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5332/comments | https://api.github.com/repos/huggingface/datasets/issues/5332/events | https://github.com/huggingface/datasets/issues/5332 | 1,476,513,072 | I_kwDODunzps5YAc0w | 5,332 | Passing numpy array to ClassLabel names causes ValueError | [] | closed | false | null | 5 | 2022-12-05T12:59:03Z | 2022-12-22T16:32:50Z | 2022-12-22T16:32:50Z | null | ### Describe the bug
If a numpy array is passed to the names argument of ClassLabel, creating a dataset with those features causes an error.
### Steps to reproduce the bug
https://colab.research.google.com/drive/1cV_es1PWZiEuus17n-2C-w0KEoEZ68IX
TLDR:
If I define my classes as:
```
my_classes = np.array(['one', 'two', 'three'])
```
Then this errors:
```py
features = Features({'value': Value('string'), 'label': ClassLabel(names=my_classes)})
dataset = Dataset.from_list(my_data, features=features)
```
```
ValueError Traceback (most recent call last)
[<ipython-input-8-a8a9d53ec82f>](https://localhost:8080/#) in <module>
----> 1 dataset = Dataset.from_list(my_data, features=features)
11 frames
[/usr/local/lib/python3.8/dist-packages/datasets/utils/py_utils.py](https://localhost:8080/#) in _asdict_inner(obj)
183 for f in fields(obj):
184 value = _asdict_inner(getattr(obj, f.name))
--> 185 if not f.init or value != f.default or f.metadata.get("include_in_asdict_even_if_is_default", False):
186 result[f.name] = value
187 return result
ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```
But this works:
```
features2 = Features({'value': Value('string'), 'label': ClassLabel(names=list(my_classes))})
dataset2 = Dataset.from_list(my_data, features=features2)
```
### Expected behavior
If I provide a numpy array of class names, I would expect either an error that the names list is the wrong type, or for it to be cast internally.
### Environment info
- `datasets` version: 2.7.1
- Platform: Linux-5.15.0-56-generic-x86_64-with-glibc2.10
- Python version: 3.8.15
- PyArrow version: 10.0.1
- Pandas version: 1.5.2
Additionally:
- Numpy version: 1.23.5
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5332/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5332/timeline | null | completed | null | null | false | [
"Should `datasets` allow `ClassLabel` input parameter to be an `np.array` even though internally we need to cast it to a Python list? @lhoestq @mariosasko ",
"Hi! No, I don't think so. The `names` parameter is [annotated](https://github.com/huggingface/datasets/blob/582236640b9109988e5f7a16a8353696ffa09a16/src/d... |
https://api.github.com/repos/huggingface/datasets/issues/2741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2741/comments | https://api.github.com/repos/huggingface/datasets/issues/2741/events | https://github.com/huggingface/datasets/issues/2741 | 957,979,559 | MDU6SXNzdWU5NTc5Nzk1NTk= | 2,741 | Add Hypersim dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | open | false | null | 0 | 2021-08-02T10:06:50Z | 2021-12-08T12:06:51Z | null | null | ## Adding a Dataset
- **Name:** Hypersim
- **Description:** photorealistic synthetic dataset for holistic indoor scene understanding
- **Paper:** *link to the dataset paper if available*
- **Data:** https://github.com/apple/ml-hypersim
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2741/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2741/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/3811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3811/comments | https://api.github.com/repos/huggingface/datasets/issues/3811/events | https://github.com/huggingface/datasets/pull/3811 | 1,158,234,407 | PR_kwDODunzps4z4dHS | 3,811 | Update dev doc gh workflows | [] | closed | false | null | 0 | 2022-03-03T10:29:01Z | 2022-10-04T09:35:54Z | 2022-03-03T10:45:54Z | null | Reflect changes from https://github.com/huggingface/transformers/pull/15891 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3811/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3811/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3811.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3811",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3811.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3811"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4792/comments | https://api.github.com/repos/huggingface/datasets/issues/4792/events | https://github.com/huggingface/datasets/issues/4792 | 1,328,593,929 | I_kwDODunzps5PMLwJ | 4,792 | Add DocVQA | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | open | false | null | 1 | 2022-08-04T13:07:26Z | 2022-08-08T05:31:20Z | null | null | ## Adding a Dataset
- **Name:** DocVQA
- **Description:** Document Visual Question Answering (DocVQA) seeks to inspire a “purpose-driven” point of view in Document Analysis and Recognition research, where the document content is extracted and used to respond to high-level tasks defined by the human consumers of this information.
- **Paper:** https://arxiv.org/abs/2007.00398
- **Data:** https://www.docvqa.org/datasets/docvqa
- **Motivation:** Models like LayoutLM and Donut in the Transformers library are fine-tuned on DocVQA. Would be very handy to directly load this dataset from the hub.
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4792/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4792/timeline | null | null | null | null | false | [
"Thanks for proposing, @NielsRogge.\r\n\r\nPlease, note this dataset requires registering in their website and their Terms and Conditions state we cannot distribute their URL:\r\n```\r\n1. You will NOT distribute the download URLs\r\n...\r\n```"
] |
https://api.github.com/repos/huggingface/datasets/issues/160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/160/comments | https://api.github.com/repos/huggingface/datasets/issues/160/events | https://github.com/huggingface/datasets/issues/160 | 620,448,236 | MDU6SXNzdWU2MjA0NDgyMzY= | 160 | caching in map causes same result to be returned for train, validation and test | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 7 | 2020-05-18T19:22:03Z | 2020-05-18T21:36:20Z | 2020-05-18T21:36:20Z | null | hello,
I am working on a program that uses the `nlp` library with the `SST2` dataset.
The rough outline of the program is:
```
import nlp as nlp_datasets
...
parser.add_argument('--dataset', help='HuggingFace Datasets id', default=['glue', 'sst2'], nargs='+')
...
dataset = nlp_datasets.load_dataset(*args.dataset)
...
# Create feature vocabs
vocabs = create_vocabs(dataset.values(), vectorizers)
...
# Create a function to vectorize based on vectorizers and vocabs:
print('TS', train_set.num_rows)
print('VS', valid_set.num_rows)
print('ES', test_set.num_rows)
# factory method to create a `convert_to_features` function based on vocabs
convert_to_features = create_featurizer(vectorizers, vocabs)
train_set = train_set.map(convert_to_features, batched=True)
train_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])
train_loader = torch.utils.data.DataLoader(train_set, batch_size=args.batchsz)
valid_set = valid_set.map(convert_to_features, batched=True)
valid_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])
valid_loader = torch.utils.data.DataLoader(valid_set, batch_size=args.batchsz)
test_set = test_set.map(convert_to_features, batched=True)
test_set.set_format(type='torch', columns=list(vectorizers.keys()) + ['y', 'lengths'])
test_loader = torch.utils.data.DataLoader(test_set, batch_size=args.batchsz)
print('TS', train_set.num_rows)
print('VS', valid_set.num_rows)
print('ES', test_set.num_rows)
```
I'm not sure if I'm using it incorrectly, but the results are not what I expect. Namely, `.map()` seems to grab the dataset from the cache and then loses track of what the specific dataset is, instead using my training data for all datasets:
```
TS 67349
VS 872
ES 1821
TS 67349
VS 67349
ES 67349
```
The behavior changes if I turn off the caching but then the results fail:
```
train_set = train_set.map(convert_to_features, batched=True, load_from_cache_file=False)
...
valid_set = valid_set.map(convert_to_features, batched=True, load_from_cache_file=False)
...
test_set = test_set.map(convert_to_features, batched=True, load_from_cache_file=False)
```
Now I get the right set of features back...
```
TS 67349
VS 872
ES 1821
100%|██████████| 68/68 [00:00<00:00, 92.78it/s]
100%|██████████| 1/1 [00:00<00:00, 75.47it/s]
0%| | 0/2 [00:00<?, ?it/s]TS 67349
VS 872
ES 1821
100%|██████████| 2/2 [00:00<00:00, 77.19it/s]
```
but I think it's losing track of the original training set:
```
Traceback (most recent call last):
File "/home/dpressel/dev/work/baseline/api-examples/layers-classify-hf-datasets.py", line 148, in <module>
for x in train_loader:
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
data = self._next_data()
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 338, in __getitem__
output_all_columns=self._output_all_columns,
File "/home/dpressel/anaconda3/lib/python3.7/site-packages/nlp/arrow_dataset.py", line 294, in _getitem
outputs = self._unnest(self._data.slice(key, 1).to_pydict())
File "pyarrow/table.pxi", line 1211, in pyarrow.lib.Table.slice
File "pyarrow/public-api.pxi", line 390, in pyarrow.lib.pyarrow_wrap_table
File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 3: In chunk 0: Invalid: Length spanned by list offsets (15859698) larger than values array (length 100000)
Process finished with exit code 1
```
The full-example program (minus the print stmts) is here:
https://github.com/dpressel/mead-baseline/pull/620/files
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/160/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/160/timeline | null | completed | null | null | false | [
"Hi @dpressel, \r\n\r\nthanks for posting your issue! Can you maybe add a complete code snippet that we can copy paste to reproduce the error? For example, I'm not sure where the variable `train_set` comes from in your code and it seems like you are loading multiple datasets at once? ",
"Hi, the full example was... |
https://api.github.com/repos/huggingface/datasets/issues/4990 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4990/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4990/comments | https://api.github.com/repos/huggingface/datasets/issues/4990/events | https://github.com/huggingface/datasets/issues/4990 | 1,378,120,806 | I_kwDODunzps5SJHRm | 4,990 | "no-token" is passed to `huggingface_hub` when token is `None` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 6 | 2022-09-19T15:14:40Z | 2022-09-30T09:16:00Z | 2022-09-30T09:16:00Z | null | ## Describe the bug
In the two lines listed below, a token is passed to `huggingface_hub` to get information about a dataset. If no token is provided, a "no-token" string is passed instead. What is the purpose of this? If there is no real purpose, I would prefer that the `None` value be sent directly and handled by `huggingface_hub`. I feel that this works here only because we assume the token will never be validated.
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L753
https://github.com/huggingface/datasets/blob/5b23f58535f14cc4dd7649485bce1ccc836e7bca/src/datasets/load.py#L1121
## Expected results
Pass `token=None` to `huggingface_hub`.
## Actual results
`token="no-token"` is passed.
## Environment info
`huggingface_hub v0.10.0dev` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4990/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4990/timeline | null | completed | null | null | false | [
"Hi @Wauplin, thanks for raising this potential issue.\r\n\r\nThe choice of passing `\"no-token\"` instead of `None` was made in this PR:\r\n- #4536 \r\n\r\nAccording to the PR description, the reason why it is passed is to avoid that `HfApi.dataset_info` uses the local token when no token should be used.",
"Hi @... |
https://api.github.com/repos/huggingface/datasets/issues/2312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2312/comments | https://api.github.com/repos/huggingface/datasets/issues/2312/events | https://github.com/huggingface/datasets/pull/2312 | 875,435,726 | MDExOlB1bGxSZXF1ZXN0NjI5Nzc4NjUz | 2,312 | Add rename_columnS method | [] | closed | false | null | 1 | 2021-05-04T12:57:53Z | 2021-05-04T13:43:13Z | 2021-05-04T13:43:12Z | null | Cherry-picked from #2255 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2312/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2312/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2312.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2312",
"merged_at": "2021-05-04T13:43:12Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2312.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2312"
} | true | [
"Merging then 😄 "
] |
https://api.github.com/repos/huggingface/datasets/issues/4595 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4595/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4595/comments | https://api.github.com/repos/huggingface/datasets/issues/4595/events | https://github.com/huggingface/datasets/issues/4595 | 1,288,275,976 | I_kwDODunzps5MyYgI | 4,595 | Dataset Viewer issue with False positive PII redaction | [] | closed | false | null | 2 | 2022-06-29T07:15:57Z | 2022-06-29T08:29:41Z | 2022-06-29T08:27:49Z | null | ### Link
https://huggingface.co/datasets/cakiki/rosetta-code
### Description
Hello, I just noticed an entry being redacted that shouldn't have been:
`RootMeanSquare@Range[10]` is being displayed as `[email protected][10]`
### Owner
_No response_ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4595/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4595/timeline | null | completed | null | null | false | [
"The value is in the data, it's not an issue with the \"dataset-viewer\".\r\n\r\n<img width=\"1161\" alt=\"Capture d’écran 2022-06-29 à 10 25 51\" src=\"https://user-images.githubusercontent.com/1676121/176389325-4d2a9a7f-1583-45b8-aa7a-960ffaa6a36a.png\">\r\n\r\n Maybe open a PR: https://huggingface.co/datasets/... |
https://api.github.com/repos/huggingface/datasets/issues/1312 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1312/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1312/comments | https://api.github.com/repos/huggingface/datasets/issues/1312/events | https://github.com/huggingface/datasets/pull/1312 | 759,532,626 | MDExOlB1bGxSZXF1ZXN0NTM0NTIyMzc1 | 1,312 | Jigsaw toxicity pred | [] | closed | false | null | 0 | 2020-12-08T15:19:14Z | 2020-12-11T12:11:32Z | 2020-12-11T12:11:32Z | null | Requires manually downloading data from Kaggle. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1312/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1312/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1312.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1312",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1312.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1312"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5755 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5755/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5755/comments | https://api.github.com/repos/huggingface/datasets/issues/5755/events | https://github.com/huggingface/datasets/issues/5755 | 1,669,048,438 | I_kwDODunzps5je6h2 | 5,755 | ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils' | [] | closed | false | null | 1 | 2023-04-14T23:28:54Z | 2023-04-14T23:36:19Z | 2023-04-14T23:36:19Z | null | ### Describe the bug
Has the module moved to a new place?
### Steps to reproduce the bug
in the import step,
```python
from datasets.utils.deprecation_utils import DeprecatedEnum
```
error:
```
ImportError: cannot import name 'DeprecatedEnum' from 'datasets.utils.deprecation_utils'
```
### Expected behavior
import successfully
### Environment info
python==3.9.16
datasets==1.18.3 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5755/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5755/timeline | null | completed | null | null | false | [
"update the version. fix"
] |
https://api.github.com/repos/huggingface/datasets/issues/1212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1212/comments | https://api.github.com/repos/huggingface/datasets/issues/1212/events | https://github.com/huggingface/datasets/pull/1212 | 757,978,795 | MDExOlB1bGxSZXF1ZXN0NTMzMjM1MTky | 1,212 | Add Sanskrit Classic texts in datasets | [] | closed | false | null | 1 | 2020-12-06T17:31:31Z | 2020-12-07T19:04:08Z | 2020-12-07T19:04:08Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1212/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1212/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1212.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1212",
"merged_at": "2020-12-07T19:04:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1212.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1212"
} | true | [
"merging since the CI is fixed on master"
] | |
https://api.github.com/repos/huggingface/datasets/issues/4074 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4074/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4074/comments | https://api.github.com/repos/huggingface/datasets/issues/4074/events | https://github.com/huggingface/datasets/issues/4074 | 1,188,449,142 | I_kwDODunzps5G1kt2 | 4,074 | Error in google/xtreme_s dataset card | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
},
{
"color": "2edb... | closed | false | null | 1 | 2022-03-31T18:07:45Z | 2022-04-01T08:12:56Z | 2022-04-01T08:12:56Z | null | **Link:** https://huggingface.co/datasets/google/xtreme_s
Not a big deal but Hungarian is considered an Eastern European language, together with Serbian, Slovak, Slovenian (all correctly categorized; Slovenia is mostly to the West of Hungary, by the way).
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4074/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4074/timeline | null | completed | null | null | false | [
"Hi @wranai, thanks for reporting.\r\n\r\nPlease note that the information about language families and groups is taken form the original paper: [XTREME-S: Evaluating Cross-lingual Speech Representations](https://arxiv.org/abs/2203.10752).\r\n\r\nIf that information is wrong, feel free to contact the paper's authors... |
https://api.github.com/repos/huggingface/datasets/issues/3403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3403/comments | https://api.github.com/repos/huggingface/datasets/issues/3403/events | https://github.com/huggingface/datasets/issues/3403 | 1,073,622,120 | I_kwDODunzps4__ixo | 3,403 | Cannot import name 'maybe_sync' | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2021-12-07T17:57:59Z | 2021-12-17T07:00:35Z | 2021-12-17T07:00:35Z | null | ## Describe the bug
Cannot seem to import datasets when running run_summarizer.py script on a VM set up on ovhcloud
## Steps to reproduce the bug
```python
from datasets import load_dataset
```
## Expected results
No error
## Actual results
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/lib/python3.7/site-packages/datasets/__init__.py", line 34, in <module>
from .arrow_dataset import Dataset, concatenate_datasets
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 48, in <module>
from .arrow_writer import ArrowWriter, OptimizedTypedSequence
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_writer.py", line 27, in <module>
from .features import (
File "/opt/conda/lib/python3.7/site-packages/datasets/features/__init__.py", line 2, in <module>
from .audio import Audio
File "/opt/conda/lib/python3.7/site-packages/datasets/features/audio.py", line 8, in <module>
from ..utils.streaming_download_manager import xopen
File "/opt/conda/lib/python3.7/site-packages/datasets/utils/streaming_download_manager.py", line 16, in <module>
from ..filesystems import COMPRESSION_FILESYSTEMS
File "/opt/conda/lib/python3.7/site-packages/datasets/filesystems/__init__.py", line 13, in <module>
from .s3filesystem import S3FileSystem # noqa: F401
File "/opt/conda/lib/python3.7/site-packages/datasets/filesystems/s3filesystem.py", line 1, in <module>
import s3fs
File "/opt/conda/lib/python3.7/site-packages/s3fs/__init__.py", line 1, in <module>
from .core import S3FileSystem, S3File
File "/opt/conda/lib/python3.7/site-packages/s3fs/core.py", line 11, in <module>
from fsspec.asyn import AsyncFileSystem, sync, sync_wrapper, maybe_sync
ImportError: cannot import name 'maybe_sync' from 'fsspec.asyn' (/opt/conda/lib/python3.7/site-packages/fsspec/asyn.py)
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.16.0
- Platform: OVH Cloud Tesla V100 Machine
- Python version: 3.7.9
- PyArrow version: 6.0.1
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3403/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3403/timeline | null | completed | null | null | false | [
"Hi ! Can you try updating `fsspec` ? The minimum version is `2021.05.0`",
"hey @lhoestq. I'm using `fsspec-2021.11.1` but still getting that error.",
"Maybe this discussion can help:\r\n\r\nhttps://github.com/fsspec/filesystem_spec/issues/597#issuecomment-958646964",
"Thanks @lhoestq. Downgrading `fsspec and... |
https://api.github.com/repos/huggingface/datasets/issues/3974 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3974/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3974/comments | https://api.github.com/repos/huggingface/datasets/issues/3974/events | https://github.com/huggingface/datasets/pull/3974 | 1,174,485,044 | PR_kwDODunzps40ssrA | 3,974 | Add XFUN dataset | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 8 | 2022-03-20T09:24:54Z | 2022-10-03T09:38:16Z | 2022-10-03T09:36:22Z | null | This PR adds XFUN dataset.
Home page and repository: https://github.com/doc-analysis/XFUND
Source code: https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/data/datasets/xfun.py | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3974/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3974/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3974.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3974",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3974.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3974"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Not sure how to generate dummy data.\r\n\r\nThe downloaded file structure is \r\n\r\n- document file paths\r\n - (a json file containing all documents info, document images folder)\r\n - (a json file containing all documents in... |
https://api.github.com/repos/huggingface/datasets/issues/2001 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2001/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2001/comments | https://api.github.com/repos/huggingface/datasets/issues/2001/events | https://github.com/huggingface/datasets/issues/2001 | 823,946,706 | MDU6SXNzdWU4MjM5NDY3MDY= | 2,001 | Empty evidence document ("provenance") in KILT ELI5 dataset | [] | closed | false | null | 1 | 2021-03-07T15:41:35Z | 2022-12-19T19:25:14Z | 2021-03-17T05:51:01Z | null | In the original KILT benchmark(https://github.com/facebookresearch/KILT),
all samples have their evidence document (i.e. Wikipedia page id) for prediction.
For example, a sample in the ELI5 dataset has the following format, including provenance (= evidence document):
`{"id": "1kiwfx", "input": "In Trading Places (1983, Akroyd/Murphy) how does the scheme at the end of the movie work? Why would buying a lot of OJ at a high price ruin the Duke Brothers?", "output": [{"answer": "I feel so old. People have been askinbg what happened at the end of this movie for what must be the last 15 years of my life. It never stops. Every year/month/fortnight, I see someone asking what happened, and someone explaining. Andf it will keep on happening, until I am 90yrs old, in a home, with nothing but the Internet and my bladder to keep me going. And there it will be: \"what happens at the end of Trading Places?\""}, {"provenance": [{"wikipedia_id": "242855", "title": "Futures contract", "section": "Section::::Abstract.", "start_paragraph_id": 1, "start_character": 14, "end_paragraph_id": 1, "end_character": 612, "bleu_score": 0.9232808519770748}]}], "meta": {"partial_evidence": [{"wikipedia_id": "520990", "title": "Trading Places", "section": "Section::::Plot.\n", "start_paragraph_id": 7, "end_paragraph_id": 7, "meta": {"evidence_span": ["On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts.", "On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts. Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice.", "Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice."]}}]}}`
However, the KILT ELI5 dataset from the huggingface datasets library only contains an empty list of provenance.
`{'id': '1oy5tc', 'input': 'in football whats the point of wasting the first two plays with a rush - up the middle - not regular rush plays i get those', 'meta': {'left_context': '', 'mention': '', 'obj_surface': [], 'partial_evidence': [], 'right_context': '', 'sub_surface': [], 'subj_aliases': [], 'template_questions': []}, 'output': [{'answer': 'In most cases the O-Line is supposed to make a hole for the running back to go through. If you run too many plays to the outside/throws the defense will catch on.\n\nAlso, 2 5 yard plays gets you a new set of downs.', 'meta': {'score': 2}, 'provenance': []}, {'answer': "I you don't like those type of plays, watch CFL. We only get 3 downs so you can't afford to waste one. Lots more passing.", 'meta': {'score': 2}, 'provenance': []}]}
`
Should I perform some other procedure to obtain the evidence documents? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2001/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2001/timeline | null | completed | null | null | false | [
"Why did you close this issue? How did you end up finding the evidence documents? I'm running into a similar issue with other KILT tasks."
] |
https://api.github.com/repos/huggingface/datasets/issues/4601 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4601/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4601/comments | https://api.github.com/repos/huggingface/datasets/issues/4601/events | https://github.com/huggingface/datasets/pull/4601 | 1,289,924,715 | PR_kwDODunzps46oWF8 | 4,601 | Upgrade pip in WIN CI | [] | closed | false | null | 2 | 2022-06-30T10:25:42Z | 2022-06-30T10:54:25Z | 2022-06-30T10:43:38Z | null | The windows CI is currently flaky: some dependencies like aiobotocore, multiprocess and seqeval sometimes fail to install.
In particular it seems that building the wheels fails. Here is an example of the logs:
```
Building wheel for seqeval (setup.py): started
Running command 'C:\tools\miniconda3\envs\py37\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"'; __file__='"'"'C:\\Users\\circleci\\AppData\\Local\\Temp\\pip-install-h55pfgbv\\seqeval_d6cdb9d23ff6490b98b6c4bcaecb516e\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6'
No parent package detected, impossible to derive `name`
running bdist_wheel
running build
running build_py
package init file 'seqeval\__init__.py' not found (or not a regular file)
package init file 'seqeval\metrics\__init__.py' not found (or not a regular file)
C:\tools\miniconda3\envs\py37\lib\site-packages\setuptools\command\install.py:37: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
setuptools.SetuptoolsDeprecationWarning,
installing to build\bdist.win-amd64\wheel
running install
running install_lib
warning: install_lib: 'build\lib' does not exist -- no Python modules to install
running install_egg_info
running egg_info
creating UNKNOWN.egg-info
writing UNKNOWN.egg-info\PKG-INFO
writing dependency_links to UNKNOWN.egg-info\dependency_links.txt
writing top-level names to UNKNOWN.egg-info\top_level.txt
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
reading manifest file 'UNKNOWN.egg-info\SOURCES.txt'
writing manifest file 'UNKNOWN.egg-info\SOURCES.txt'
Copying UNKNOWN.egg-info to build\bdist.win-amd64\wheel\.\UNKNOWN-0.0.0-py3.7.egg-info
running install_scripts
creating build\bdist.win-amd64\wheel\UNKNOWN-0.0.0.dist-info\WHEEL
creating 'C:\Users\circleci\AppData\Local\Temp\pip-wheel-x3cc8ym6\UNKNOWN-0.0.0-py3-none-any.whl' and adding 'build\bdist.win-amd64\wheel' to it
adding 'UNKNOWN-0.0.0.dist-info/METADATA'
adding 'UNKNOWN-0.0.0.dist-info/WHEEL'
adding 'UNKNOWN-0.0.0.dist-info/top_level.txt'
adding 'UNKNOWN-0.0.0.dist-info/RECORD'
removing build\bdist.win-amd64\wheel
Building wheel for seqeval (setup.py): finished with status 'done'
Created wheel for seqeval: filename=UNKNOWN-0.0.0-py3-none-any.whl size=963 sha256=67eb93a6e1ff4796c5882a13f9fa25bb0d3d103796e2525f9cecf3b2ef26d4b1
Stored in directory: c:\users\circleci\appdata\local\pip\cache\wheels\05\96\ee\7cac4e74f3b19e3158dce26a20a1c86b3533c43ec72a549fd7
WARNING: Built wheel for seqeval is invalid: Wheel has unexpected file name: expected 'seqeval', got 'UNKNOWN'
```
I tried to update pip and re-run the CI several times and I couldn't re-experience this issue for now, so I think upgrading pip may solve the issue | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4601/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4601/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/4601.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4601",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4601.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4601"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"It failed terribly"
] |
https://api.github.com/repos/huggingface/datasets/issues/2157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2157/comments | https://api.github.com/repos/huggingface/datasets/issues/2157/events | https://github.com/huggingface/datasets/pull/2157 | 847,205,239 | MDExOlB1bGxSZXF1ZXN0NjA2MjM1NjUx | 2,157 | updated user permissions based on umask | [] | closed | false | null | 0 | 2021-03-31T19:38:29Z | 2021-04-06T07:19:19Z | 2021-04-06T07:19:19Z | null | Updated user permissions based on running user's umask (#2065). Let me know if `0o666` is looking good or should I change it to `~umask` only (to give execute permissions as well) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2157/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2157/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2157.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2157",
"merged_at": "2021-04-06T07:19:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2157.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2157"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2438 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2438/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2438/comments | https://api.github.com/repos/huggingface/datasets/issues/2438/events | https://github.com/huggingface/datasets/pull/2438 | 908,461,914 | MDExOlB1bGxSZXF1ZXN0NjU5MTQ5Njg0 | 2,438 | Fix NQ features loading: reorder fields of features to match nested fields order in arrow data | [] | closed | false | null | 0 | 2021-06-01T16:09:30Z | 2021-06-04T09:02:31Z | 2021-06-04T09:02:31Z | null | As mentioned in #2401, there is an issue when loading the features of `natural_questions` since the order of the nested fields in the features don't match. The order is important since it matters for the underlying arrow schema.
To fix that I re-order the features based on the arrow schema:
```python
inferred_features = Features.from_arrow_schema(arrow_table.schema)
self.info.features = self.info.features.reorder_fields_as(inferred_features)
assert self.info.features.type == inferred_features.type
```
The re-ordering is a recursive function. It takes into account that the `Sequence` feature type is a struct of list and not a list of struct.
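Roughly, the idea is something like this simplified sketch (a hypothetical standalone helper working on plain dicts, ignoring the `Sequence` struct-of-list handling; the actual method lives on `Features`):
```python
def reorder_fields_as(source: dict, target: dict) -> dict:
    # Rebuild `source` so that its (nested) keys follow the key order of `target`.
    reordered = {}
    for name, target_value in target.items():
        source_value = source[name]
        if isinstance(source_value, dict) and isinstance(target_value, dict):
            # recurse into nested fields
            reordered[name] = reorder_fields_as(source_value, target_value)
        else:
            reordered[name] = source_value
    return reordered


# Toy example: the nested field order now follows the second argument
print(reorder_fields_as(
    {"answer": {"text": "a", "start": 0}, "question": "q"},
    {"question": None, "answer": {"start": None, "text": None}},
))
```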
Now it's possible to load `natural_questions` again :) | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 1,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2438/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2438/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2438.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2438",
"merged_at": "2021-06-04T09:02:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2438.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2438"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5982 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5982/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5982/comments | https://api.github.com/repos/huggingface/datasets/issues/5982/events | https://github.com/huggingface/datasets/issues/5982 | 1,770,333,296 | I_kwDODunzps5phSRw | 5,982 | 404 on Datasets Documentation Page | [] | closed | false | null | 2 | 2023-06-22T20:14:57Z | 2023-06-26T15:45:03Z | 2023-06-26T15:45:03Z | null | ### Describe the bug
Getting a 404 from the Hugging Face Datasets docs page:
https://huggingface.co/docs/datasets/index
### Steps to reproduce the bug
1. Go to URL https://huggingface.co/docs/datasets/index
2. Notice 404 not found
### Expected behavior
URL should either show docs or redirect to new location
### Environment info
hugginface.co | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5982/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5982/timeline | null | completed | null | null | false | [
"This wasn’t working for me a bit earlier, but it looks to be back up now",
"We had a minor issue updating the docs after the latest release. It should work now :)."
] |
https://api.github.com/repos/huggingface/datasets/issues/3710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3710/comments | https://api.github.com/repos/huggingface/datasets/issues/3710/events | https://github.com/huggingface/datasets/pull/3710 | 1,133,955,393 | PR_kwDODunzps4ymQMQ | 3,710 | Fix CI code quality issue | [] | closed | false | null | 0 | 2022-02-12T12:05:39Z | 2022-02-12T12:58:05Z | 2022-02-12T12:58:04Z | null | Fix CI code quality issue introduced by #3695. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3710/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3710/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3710.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3710",
"merged_at": "2022-02-12T12:58:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3710.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3710"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3883/comments | https://api.github.com/repos/huggingface/datasets/issues/3883/events | https://github.com/huggingface/datasets/issues/3883 | 1,164,663,229 | I_kwDODunzps5Fa1m9 | 3,883 | The metric Meteor doesn't work for nltk ==3.6.4 | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2022-03-10T02:28:27Z | 2022-03-10T09:03:39Z | 2022-03-10T09:03:39Z | null | ## Describe the bug
Using the metric Meteor with nltk == 3.6.4 gives a TypeError:
TypeError: descriptor 'lower' for 'str' objects doesn't apply to a 'list' object
## Steps to reproduce the bug
```python
import datasets
metric = datasets.load_metric("meteor")
predictions = ["hello world"]
references = ["hello world"]
metric.compute(predictions=predictions, references=references)
```
## Expected results
No error, just a meteor score.
## Actual results
TypeError: descriptor 'lower' for 'str' objects doesn't apply to a 'list' object
I think this TypeError exists because input sentences are tokenized into lists of tokens and str.lower() is applied to this list of tokens.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.3
- Platform: linux
- Python version: 3.8.12
- PyArrow version: 7.0.0
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3883/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3883/timeline | null | completed | null | null | false | [
"Hi @zhaowei-wang98, thanks for reporting.\r\n\r\nWe are fixing it... "
] |
https://api.github.com/repos/huggingface/datasets/issues/526 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/526/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/526/comments | https://api.github.com/repos/huggingface/datasets/issues/526/events | https://github.com/huggingface/datasets/pull/526 | 684,615,455 | MDExOlB1bGxSZXF1ZXN0NDcyNDczNjcw | 526 | Returning None instead of "python" if dataset is unformatted | [] | closed | false | null | 2 | 2020-08-24T12:10:35Z | 2020-08-24T12:50:43Z | 2020-08-24T12:50:42Z | null | Following the discussion on Slack, this small fix ensures that calling `dataset.set_format(type=dataset.format["type"])` works properly. Slightly breaking as calling `dataset.format` when the dataset is unformatted will return `None` instead of `python`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/526/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/526/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/526.diff",
"html_url": "https://github.com/huggingface/datasets/pull/526",
"merged_at": "2020-08-24T12:50:42Z",
"patch_url": "https://github.com/huggingface/datasets/pull/526.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/526"
} | true | [
"We have to change the tests to expect `None` instead of `python` then",
"Merging!"
] |
https://api.github.com/repos/huggingface/datasets/issues/2067 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2067/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2067/comments | https://api.github.com/repos/huggingface/datasets/issues/2067/events | https://github.com/huggingface/datasets/issues/2067 | 833,559,940 | MDU6SXNzdWU4MzM1NTk5NDA= | 2,067 | Multiprocessing windows error | [] | closed | false | null | 10 | 2021-03-17T09:12:28Z | 2021-08-04T17:59:08Z | 2021-08-04T17:59:08Z | null | As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop.
For example at the map_to_array part.
An error occurs because the cache file already exists and Windows throws an error. After this the log crashes into a loop | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2067/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2067/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting.\r\nThis looks like a bug, could you try to provide a minimal code example that reproduces the issue ? This would be very helpful !\r\n\r\nOtherwise I can try to run the wav2vec2 code above on my side but probably not this week..",
"```\r\nfrom datasets import load_dataset\r\n\r\ndatase... |
https://api.github.com/repos/huggingface/datasets/issues/1693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1693/comments | https://api.github.com/repos/huggingface/datasets/issues/1693/events | https://github.com/huggingface/datasets/pull/1693 | 780,268,595 | MDExOlB1bGxSZXF1ZXN0NTUwMTc3MDEx | 1,693 | Fix reuters metadata parsing errors | [] | closed | false | null | 0 | 2021-01-06T08:26:03Z | 2021-01-07T23:53:47Z | 2021-01-07T14:01:22Z | null | Was missing the last entry in each metadata category | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1693/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1693/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1693.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1693",
"merged_at": "2021-01-07T14:01:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1693.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1693"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/418 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/418/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/418/comments | https://api.github.com/repos/huggingface/datasets/issues/418/events | https://github.com/huggingface/datasets/issues/418 | 661,914,873 | MDU6SXNzdWU2NjE5MTQ4NzM= | 418 | Addition of google drive links to dl_manager | [] | closed | false | null | 3 | 2020-07-20T14:52:02Z | 2020-07-20T15:39:32Z | 2020-07-20T15:39:32Z | null | Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown.
This is the script for me:
```python
import json
import os

import gdown
import nlp

# _DESCRIPTION and _CITATION are defined elsewhere in the full script.


class EmoConfig(nlp.BuilderConfig):
    """BuilderConfig for EmoContext."""

    def __init__(self, **kwargs):
        """BuilderConfig for EmoContext.
        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(EmoConfig, self).__init__(**kwargs)


_TEST_URL = "https://drive.google.com/file/d/1Hn5ytHSSoGOC4sjm3wYy0Dh0oY_oXBbb/view?usp=sharing"
_TRAIN_URL = "https://drive.google.com/file/d/12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X/view?usp=sharing"


class EmoDataset(nlp.GeneratorBasedBuilder):
    """SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text. Version 1.0.0"""

    VERSION = nlp.Version("1.0.0")
    force = False

    def _info(self):
        return nlp.DatasetInfo(
            description=_DESCRIPTION,
            features=nlp.Features(
                {
                    "text": nlp.Value("string"),
                    "label": nlp.features.ClassLabel(names=["others", "happy", "sad", "angry"]),
                }
            ),
            supervised_keys=None,
            homepage="https://www.aclweb.org/anthology/S19-2005/",
            citation=_CITATION,
        )

    def _get_drive_url(self, url):
        # Turn a Google Drive share link into a direct uc?id= download URL.
        base_url = 'https://drive.google.com/uc?id='
        split_url = url.split('/')
        return base_url + split_url[5]

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        # Download with gdown instead of dl_manager, since the share links
        # return nothing through the default download manager.
        if not os.path.exists("emo-train.json") or self.force:
            gdown.download(self._get_drive_url(_TRAIN_URL), "emo-train.json", quiet=True)
        if not os.path.exists("emo-test.json") or self.force:
            gdown.download(self._get_drive_url(_TEST_URL), "emo-test.json", quiet=True)
        return [
            nlp.SplitGenerator(
                name=nlp.Split.TRAIN,
                gen_kwargs={
                    "filepath": "emo-train.json",
                    "split": "train",
                },
            ),
            nlp.SplitGenerator(
                name=nlp.Split.TEST,
                gen_kwargs={"filepath": "emo-test.json", "split": "test"},
            ),
        ]

    def _generate_examples(self, filepath, split):
        """Yields examples."""
        with open(filepath, 'rb') as f:
            data = json.load(f)
            for id_, text, label in zip(data["text"].keys(), data["text"].values(), data["Label"].values()):
                yield id_, {
                    "text": text,
                    "label": label,
                }
```
Can someone help me in adding gdrive links to be used with default dl_manager or adding gdown as another dl_manager, because I'd like to add this dataset to nlp's official database. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/418/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/418/timeline | null | completed | null | null | false | [
"I think the problem is the way you wrote your urls. Try the following structure to see `https://drive.google.com/uc?export=download&id=your_file_id` . \r\n\r\n@lhoestq ",
"Oh sorry, I think `_get_drive_url` is doing that. \r\n\r\nHave you tried to use `dl_manager.download_and_extract(_get_drive_url(_TRAIN_URL)`... |
https://api.github.com/repos/huggingface/datasets/issues/1960 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1960/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1960/comments | https://api.github.com/repos/huggingface/datasets/issues/1960/events | https://github.com/huggingface/datasets/pull/1960 | 818,073,154 | MDExOlB1bGxSZXF1ZXN0NTgxNDMzOTY4 | 1,960 | Allow stateful function in dataset.map | [] | closed | false | null | 3 | 2021-02-28T01:29:05Z | 2021-03-23T15:26:49Z | 2021-03-23T15:26:49Z | null | Removes the "test type" section in Dataset.map which would modify the state of the stateful function. Now, the return type of the map function is inferred after processing the first example.
Fixes #1940
@lhoestq Not very happy with the usage of `nonlocal`. Would like to hear your opinion on this. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1960/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1960/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1960.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1960",
"merged_at": "2021-03-23T15:26:49Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1960.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1960"
} | true | [
"@lhoestq Added a test. If you can come up with a better stateful callable, I'm all ears 😄. ",
"Sorry I said earlier that it was good to have it inside the loop, my mistake !",
"@lhoestq Okay, did some refactoring and now the \"cache\" part comes before the for loop. Thanks for the guidance.\r\n\r\nThink this... |
https://api.github.com/repos/huggingface/datasets/issues/663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/663/comments | https://api.github.com/repos/huggingface/datasets/issues/663/events | https://github.com/huggingface/datasets/pull/663 | 706,732,636 | MDExOlB1bGxSZXF1ZXN0NDkxMjI3NzUz | 663 | Created dataset card snli.md | [
{
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets",
"id": 2067401494,
"name": "Dataset discussion",
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion"
}
] | closed | false | null | 11 | 2020-09-22T22:29:37Z | 2020-10-13T17:05:20Z | 2020-10-12T20:26:52Z | null | First draft of a dataset card using the SNLI corpus as an example.
This is mostly based on the [Google Doc draft](https://docs.google.com/document/d/1dKPGP-dA2W0QoTRGfqQ5eBp0CeSsTy7g2yM8RseHtos/edit), but I added a few sections and moved some things around.
- I moved **Who Was Involved** to follow **Language**, both because I thought the authors should be presented more towards the front and because I think it makes sense to present the speakers close to the language so it doesn't have to be repeated.
- I created a section I called **Data Characteristics** by pulling some things out of the other sections. I was thinking that this would be more about the language use in context of the specific task construction. That name isn't very descriptive though and could probably be improved.
-- Domain and language type out of **Language**. I particularly wanted to keep the Language section as simple and as abstracted from the task as possible.
-- 'How was the data collected' out of **Who Was Involved**
-- Normalization out of **Features/Dataset Structure**
-- I also added an annotation process section.
- I kept the **Features** section mostly the same as the Google Doc, but I renamed it **Dataset Structure** to more clearly separate it from the language use, and added some links to the documentation pages.
- I also kept **Tasks Supported**, **Known Limitations**, and **Licensing Information** mostly the same. Looking at it again though, maybe **Tasks Supported** should come before **Data Characteristics**?
The trickiest part about writing a dataset card for the SNLI corpus specifically is that it's built on datasets which are themselves built on datasets so I had to dig in a lot of places to find information. I think this will be easier with other datasets and once there is more uptake of dataset cards so they can just link to each other. (Maybe that needs to be an added section?)
I also made an effort not to repeat information across the sections or to refer to a previous section if the information was relevant in a later one. Is there too much repetition still? | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/663/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/663/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/663.diff",
"html_url": "https://github.com/huggingface/datasets/pull/663",
"merged_at": "2020-10-12T20:26:52Z",
"patch_url": "https://github.com/huggingface/datasets/pull/663.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/663"
} | true | [
"Adding a direct link to the rendered markdown:\r\nhttps://github.com/mcmillanmajora/datasets/blob/add_dataset_documentation/datasets/snli/README.md\r\n",
"It would be amazing if we ended up with this much information on all of our datasets :) \r\n\r\nI don't think there's too much repetition, everything that is ... |
https://api.github.com/repos/huggingface/datasets/issues/1066 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1066/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1066/comments | https://api.github.com/repos/huggingface/datasets/issues/1066/events | https://github.com/huggingface/datasets/pull/1066 | 756,391,957 | MDExOlB1bGxSZXF1ZXN0NTMxOTQ0MDc0 | 1,066 | Add ChrEn | [] | closed | false | null | 3 | 2020-12-03T17:17:48Z | 2020-12-03T21:49:39Z | 2020-12-03T21:49:39Z | null | Adding the Cherokee English machine translation dataset of https://github.com/ZhangShiyue/ChrEn | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1066/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1066/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1066.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1066",
"merged_at": "2020-12-03T21:49:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1066.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1066"
} | true | [
"I just saw your PR actually ^^",
"> I just saw your PR actually ^^\r\n\r\nSomehow that still doesn't work, lmk if you have any ideas.",
"Did you rebase from master ?"
] |
https://api.github.com/repos/huggingface/datasets/issues/3574 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3574/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3574/comments | https://api.github.com/repos/huggingface/datasets/issues/3574/events | https://github.com/huggingface/datasets/pull/3574 | 1,101,781,401 | PR_kwDODunzps4w7vu6 | 3,574 | Fix qa4mre tags | [] | closed | false | null | 0 | 2022-01-13T13:56:59Z | 2022-01-13T14:03:02Z | 2022-01-13T14:03:01Z | null | The YAML tags were invalid. I also fixed the dataset mirroring logging that failed because of this issue [here](https://github.com/huggingface/datasets/actions/runs/1690109581) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3574/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3574/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3574.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3574",
"merged_at": "2022-01-13T14:03:01Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3574.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3574"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/309 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/309/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/309/comments | https://api.github.com/repos/huggingface/datasets/issues/309/events | https://github.com/huggingface/datasets/pull/309 | 644,783,822 | MDExOlB1bGxSZXF1ZXN0NDM5MzQ1NzYz | 309 | Add narrative qa | [] | closed | false | null | 11 | 2020-06-24T17:26:18Z | 2020-09-03T09:02:10Z | 2020-09-03T09:02:09Z | null | Test cases for dummy data don't pass
Only contains data for summaries (not whole story) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/309/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/309/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/309.diff",
"html_url": "https://github.com/huggingface/datasets/pull/309",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/309.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/309"
} | true | [
"Does it make sense to download the full stories? I remember attempting to implement this dataset a while ago and ended up with something like:\r\n```python\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n\r\n dl_dir = dl_manager.download_and_extract(_DOWNLO... |
https://api.github.com/repos/huggingface/datasets/issues/4261 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4261/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4261/comments | https://api.github.com/repos/huggingface/datasets/issues/4261/events | https://github.com/huggingface/datasets/issues/4261 | 1,221,883,779 | I_kwDODunzps5I1HeD | 4,261 | data leakage in `webis/conclugen` dataset | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 5 | 2022-04-30T17:43:37Z | 2022-05-03T06:04:26Z | 2022-05-03T06:04:26Z | null | ## Describe the bug
Some samples (argument-conclusion pairs) in the *training* split of the `webis/conclugen` dataset are present in both the *validation* and *test* splits, creating data leakage and distorting model results.
Furthermore, all splits contain duplicate samples.
## Steps to reproduce the bug
```python
from datasets import load_dataset
training = load_dataset("webis/conclugen", "base", split="train")
validation = load_dataset("webis/conclugen", "base", split="validation")
testing = load_dataset("webis/conclugen", "base", split="test")
# collect which sample id's are present in the training split
ids_validation = list()
ids_testing = list()
for train_sample in training:
    train_argument = train_sample["argument"]
    train_conclusion = train_sample["conclusion"]
    train_id = train_sample["id"]

    # test if current sample is in validation split
    if train_argument in validation["argument"]:
        for validation_sample in validation:
            validation_argument = validation_sample["argument"]
            validation_conclusion = validation_sample["conclusion"]
            validation_id = validation_sample["id"]
            if train_argument == validation_argument and train_conclusion == validation_conclusion:
                ids_validation.append(validation_id)

    # test if current sample is in test split
    if train_argument in testing["argument"]:
        for testing_sample in testing:
            testing_argument = testing_sample["argument"]
            testing_conclusion = testing_sample["conclusion"]
            testing_id = testing_sample["id"]
            if train_argument == testing_argument and train_conclusion == testing_conclusion:
                ids_testing.append(testing_id)
```
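The same overlap check can also be written with sets, which runs much faster on the full splits (a sketch building on the variables above, assuming exact equality of the `(argument, conclusion)` pair defines a duplicate):
```python
# Hashable (argument, conclusion) pairs from the training split
train_pairs = set(zip(training["argument"], training["conclusion"]))

leaked_validation = [
    sample["id"]
    for sample in validation
    if (sample["argument"], sample["conclusion"]) in train_pairs
]
leaked_testing = [
    sample["id"]
    for sample in testing
    if (sample["argument"], sample["conclusion"]) in train_pairs
]
print(len(leaked_validation), len(leaked_testing))
```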
## Expected results
Length of both lists `ids_validation` and `ids_testing` should be zero.
## Actual results
Length of `ids_validation` = `2556`
Length of `ids_testing` = `287`
Furthermore, there seem to be duplicate samples in (at least) the *training* split, since:
`print(len(set(ids_validation)))` = `950`
`print(len(set(ids_testing)))` = `101`
All in all, around 7% of the samples of each the *validation* and *test* split seems to be present in the *training* split.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 1.18.4
- Platform: macOS-12.3.1-arm64-arm-64bit
- Python version: 3.9.10
- PyArrow version: 7.0.0 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4261/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4261/timeline | null | completed | null | null | false | [
"Hi @xflashxx, thanks for reporting.\r\n\r\nPlease note that this dataset was generated and shared by Webis Group: https://huggingface.co/webis\r\n\r\nWe are contacting the dataset owners to inform them about the issue you found. We'll keep you updated of their reply.",
"i'd suggest just pinging the authors here ... |
https://api.github.com/repos/huggingface/datasets/issues/5271 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5271/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5271/comments | https://api.github.com/repos/huggingface/datasets/issues/5271/events | https://github.com/huggingface/datasets/pull/5271 | 1,456,807,738 | PR_kwDODunzps5DTDX1 | 5,271 | Fix #5269 | [] | closed | false | null | 1 | 2022-11-20T07:50:49Z | 2022-11-21T15:07:19Z | 2022-11-21T15:06:38Z | null | ```
$ datasets-cli convert --datasets_directory <TAB>
datasets_directory
benchmarks/ docs/ metrics/ notebooks/ src/ templates/ tests/ utils/
```
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5271/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5271/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5271.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5271",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/5271.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5271"
} | true | [
"See <https://github.com/huggingface/datasets/issues/5269>"
] |
https://api.github.com/repos/huggingface/datasets/issues/2131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2131/comments | https://api.github.com/repos/huggingface/datasets/issues/2131/events | https://github.com/huggingface/datasets/issues/2131 | 843,133,112 | MDU6SXNzdWU4NDMxMzMxMTI= | 2,131 | When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 3 | 2021-03-29T08:45:58Z | 2021-04-10T11:08:55Z | 2021-04-10T11:08:55Z | null | version: 1.5.0
met a very strange error, I am training large scale language model, and need train on 2 machines(workers).
And sometimes I will get this error `TypeError: 'NoneType' object is not iterable`
This is traceback
```
71 | | Traceback (most recent call last):
-- | -- | --
72 | | File "run_gpt.py", line 316, in <module>
73 | | main()
74 | | File "run_gpt.py", line 222, in main
75 | | delimiter="\t", column_names=["input_ids", "attention_mask", "chinese_ref"])
76 | | File "/data/miniconda3/lib/python3.7/site-packages/datasets/load.py", line 747, in load_dataset
77 | | use_auth_token=use_auth_token,
78 | | File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 513, in download_and_prepare
79 | | self.download_post_processing_resources(dl_manager)
80 | | File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 673, in download_post_processing_resources
81 | | for split in self.info.splits:
82 | | TypeError: 'NoneType' object is not iterable
83 | | WARNING:datasets.builder:Reusing dataset csv (/usr/local/app/.cache/huggingface/datasets/csv/default-1c257ebd48e225e7/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2)
84 | | Traceback (most recent call last):
85 | | File "/data/miniconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
86 | | "__main__", mod_spec)
87 | | File "/data/miniconda3/lib/python3.7/runpy.py", line 85, in _run_code
88 | | exec(code, run_globals)
89 | | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module>
90 | | main()
91 | | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main
92 | | sigkill_handler(signal.SIGTERM, None) # not coming back
93 | | File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler
94 | | raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd)
```
On worker 1 it loads the dataset well; however, worker 2 gets this error.
I run into this error from time to time; sometimes it just goes well. | {
"+1": 0,
"-1": 0,
"confused": 1,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2131/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2131/timeline | null | completed | null | null | false | [
"Hi ! Thanks for reporting\r\nI was able to reproduce this issue. This was caused by missing split infos if a worker reloads the cache of the other worker.\r\n\r\nI just opened https://github.com/huggingface/datasets/pull/2137 to fix this issue",
"The PR got merged :)\r\nFeel free to try it out on the `master` br... |
https://api.github.com/repos/huggingface/datasets/issues/5473 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5473/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5473/comments | https://api.github.com/repos/huggingface/datasets/issues/5473/events | https://github.com/huggingface/datasets/pull/5473 | 1,558,668,197 | PR_kwDODunzps5Inm9h | 5,473 | Set dev version | [] | closed | false | null | 3 | 2023-01-26T19:34:44Z | 2023-01-26T19:47:34Z | 2023-01-26T19:38:30Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5473/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5473/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5473.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5473",
"merged_at": "2023-01-26T19:38:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5473.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5473"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/2206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2206/comments | https://api.github.com/repos/huggingface/datasets/issues/2206/events | https://github.com/huggingface/datasets/issues/2206 | 855,252,415 | MDU6SXNzdWU4NTUyNTI0MTU= | 2,206 | Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 7 | 2021-04-11T08:40:09Z | 2021-11-10T12:18:30Z | 2021-11-10T12:04:28Z | null | I added five more special tokens into the GPT2 tokenizer. But after that, when I try to pre-process the data using my previous code, I got an error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_single
writer.write(example)
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 296, in write
self.write_on_file()
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 270, in write_on_file
pa_array = pa.array(typed_sequence)
File "pyarrow/array.pxi", line 222, in pyarrow.lib.array
File "pyarrow/array.pxi", line 110, in pyarrow.lib._handle_arrow_array_protocol
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_writer.py", line 108, in __arrow_array__
out = out.cast(pa.list_(self.optimized_int_type))
File "pyarrow/array.pxi", line 810, in pyarrow.lib.Array.cast
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/pyarrow/compute.py", line 281, in cast
return call_function("cast", [arr], options)
File "pyarrow/_compute.pyx", line 465, in pyarrow._compute.call_function
File "pyarrow/_compute.pyx", line 294, in pyarrow._compute.Function.call
File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Integer value 50259 not in range: -128 to 127
Do you have any idea about it? | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2206/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2206/timeline | null | completed | null | null | false | [
"Hi,\r\n\r\nthe output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L138-L141)). It looks like that one of the values in a dictionary returned by the tokenizer is out of the assume... |
https://api.github.com/repos/huggingface/datasets/issues/1109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1109/comments | https://api.github.com/repos/huggingface/datasets/issues/1109/events | https://github.com/huggingface/datasets/pull/1109 | 757,055,702 | MDExOlB1bGxSZXF1ZXN0NTMyNDk1MDk2 | 1,109 | add woz_dialogue | [] | closed | false | null | 0 | 2020-12-04T12:13:07Z | 2020-12-05T15:41:23Z | 2020-12-05T15:40:18Z | null | Adding Wizard-of-Oz task oriented dialogue dataset
https://github.com/nmrksic/neural-belief-tracker/tree/master/data/woz
https://arxiv.org/abs/1604.04562 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1109/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1109/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1109.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1109",
"merged_at": "2020-12-05T15:40:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1109.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1109"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4/comments | https://api.github.com/repos/huggingface/datasets/issues/4/events | https://github.com/huggingface/datasets/issues/4 | 600,185,417 | MDU6SXNzdWU2MDAxODU0MTc= | 4 | [Feature] Keep the list of labels of a dataset as metadata | [] | closed | false | null | 6 | 2020-04-15T10:17:10Z | 2020-07-08T16:59:46Z | 2020-05-04T06:11:57Z | null | It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4/timeline | null | completed | null | null | false | [
"Yes! I see mostly two options for this:\r\n- a `Feature` approach like currently (but we might deprecate features)\r\n- wrapping in a smart way the Dictionary arrays of Arrow: https://arrow.apache.org/docs/python/data.html?highlight=dictionary%20encode#dictionary-arrays",
"I would have a preference for the secon... |
https://api.github.com/repos/huggingface/datasets/issues/4468 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4468/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4468/comments | https://api.github.com/repos/huggingface/datasets/issues/4468/events | https://github.com/huggingface/datasets/pull/4468 | 1,266,715,742 | PR_kwDODunzps45bERK | 4,468 | Generalize tutorials for audio and vision | [
{
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation",
"id": 1935892861,
"name": "documentation",
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation"
}
] | closed | false | null | 1 | 2022-06-09T22:00:44Z | 2022-06-14T16:22:02Z | 2022-06-14T16:12:00Z | null | This PR updates the tutorials to be more generalizable to all modalities. After reading the tutorials, a user should be able to load any type of dataset, know how to index into and slice a dataset, and do the most basic/common type of preprocessing (tokenization, resampling, applying transforms) depending on their dataset.
Other changes include:
- Removed the sections about a dataset's metadata, features, and columns because we cover this in an earlier tutorial about inspecting the `DatasetInfo` through the dataset builder.
- Separated the sharing dataset tutorial into two sections: (1) uploading via the web interface and (2) using the `huggingface_hub` library.
- Renamed some tutorials in the TOC to be more clear and specific.
- Added more text to nudge users towards joining the community and asking questions on the forums.
- If it's okay with everyone, I'd also like to remove the section about loading and using metrics since we have the `evaluate` docs now.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4468/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4468/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4468.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4468",
"merged_at": "2022-06-14T16:12:00Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4468.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4468"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3971 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3971/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3971/comments | https://api.github.com/repos/huggingface/datasets/issues/3971/events | https://github.com/huggingface/datasets/pull/3971 | 1,174,329,442 | PR_kwDODunzps40sS4W | 3,971 | Applied index-filters on scores in search.py. | [] | closed | false | null | 1 | 2022-03-19T18:43:42Z | 2022-04-12T14:48:23Z | 2022-04-12T14:41:58Z | null | Updated search.py to resolve the issue mentioned in https://github.com/huggingface/datasets/issues/3961.
Applied index-filters on scores in get_nearest_examples and get_nearest_examples_batch methods of search.py. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3971/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3971/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/3971.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3971",
"merged_at": "2022-04-12T14:41:58Z",
"patch_url": "https://github.com/huggingface/datasets/pull/3971.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3971"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |