Dataset schema (column: type, observed range):

- id: int64 (599M – 3.26B)
- number: int64 (1 – 7.7k)
- title: string (length 1 – 290)
- body: string (length 0 – 228k)
- state: string (2 classes)
- html_url: string (length 46 – 51)
- created_at: timestamp[s] (2020-04-14 10:18:02 – 2025-07-23 08:04:53)
- updated_at: timestamp[s] (2020-04-27 16:04:17 – 2025-07-23 18:53:44)
- closed_at: timestamp[s] (2020-04-14 12:01:40 – 2025-07-23 16:44:42)
- user: dict
- labels: list (length 0 – 4)
- is_pull_request: bool (2 classes)
- comments: list (length 0 – 0)
815,985,167
1,941
Loading of FAISS index fails for index_name = 'exact'
Hi,

It looks like loading of the FAISS index now fails when using `index_name="exact"`. For example, from the RAG [model card](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage). Running `transformers==4.3.2` and `datasets` installed from source on the latest `master` branch.

```bash
(venv) sergey_mkrtchyan datasets (master) $ python
Python 3.8.6 (v3.8.6:db455296be, Sep 23 2020, 13:31:39)
[Clang 6.0 (clang-600.0.57)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
>>> tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
>>> retriever = RagRetriever.from_pretrained("facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True)
Using custom data configuration dummy.psgs_w100.nq.no_index-dummy=True,with_index=False
Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.no_index-dummy=True,with_index=False/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)
Using custom data configuration dummy.psgs_w100.nq.exact-50b6cda57ff32ab4
Reusing dataset wiki_dpr (/Users/sergey_mkrtchyan/.cache/huggingface/datasets/wiki_dpr/dummy.psgs_w100.nq.exact-50b6cda57ff32ab4/0.0.0/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb)
  0%|          | 0/10 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 425, in from_pretrained
    return cls(
  File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 387, in __init__
    self.init_retrieval()
  File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 458, in init_retrieval
    self.index.init_index()
  File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/transformers/models/rag/retrieval_rag.py", line 284, in init_index
    self.dataset = load_dataset(
  File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/load.py", line 750, in load_dataset
    ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
  File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 734, in as_dataset
    datasets = utils.map_nested(
  File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/utils/py_utils.py", line 195, in map_nested
    return function(data_struct)
  File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/builder.py", line 769, in _build_single_dataset
    post_processed = self._post_process(ds, resources_paths)
  File "/Users/sergey_mkrtchyan/.cache/huggingface/modules/datasets_modules/datasets/wiki_dpr/8a97e0f4fa5bc46e179474db6a61b09d5d2419d2911835bd3f91d110c936d8bb/wiki_dpr.py", line 205, in _post_process
    dataset.add_faiss_index("embeddings", custom_index=index)
  File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/arrow_dataset.py", line 2516, in add_faiss_index
    super().add_faiss_index(
  File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 416, in add_faiss_index
    faiss_index.add_vectors(self, column=column, train_size=train_size, faiss_verbose=faiss_verbose)
  File "/Users/sergey_mkrtchyan/workspace/huggingface/datasets/src/datasets/search.py", line 281, in add_vectors
    self.faiss_index.add(vecs)
  File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/__init__.py", line 104, in replacement_add
    self.add_c(n, swig_ptr(x))
  File "/Users/sergey_mkrtchyan/workspace/cformers/venv/lib/python3.8/site-packages/faiss/swigfaiss.py", line 3263, in add
    return _swigfaiss.IndexHNSW_add(self, n, x)
RuntimeError: Error in virtual void faiss::IndexHNSW::add(faiss::Index::idx_t, const float *) at /Users/runner/work/faiss-wheels/faiss-wheels/faiss/faiss/IndexHNSW.cpp:356: Error: 'is_trained' failed
>>>
```

The issue seems to be related to the scalar quantization in faiss added in this commit: 8c5220307c33f00e01c3bf7b8. Reverting it fixes the issue.
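The `'is_trained' failed` assertion means vectors were added to a quantizer-backed index before `train()` was called. A minimal stdlib sketch of that invariant (hypothetical class and names, not the faiss implementation):

```python
# Hypothetical sketch of the invariant behind faiss's "'is_trained' failed" error:
# an index backed by a scalar quantizer must see train() before add().
class QuantizedIndex:
    def __init__(self):
        self.is_trained = False  # an SQ/IVF-style index starts untrained
        self.vectors = []

    def train(self, sample):
        # a real index would fit quantizer parameters on the sample here
        self.is_trained = True

    def add(self, vecs):
        if not self.is_trained:
            raise RuntimeError("'is_trained' failed")  # mirrors the faiss assertion
        self.vectors.extend(vecs)

index = QuantizedIndex()
try:
    index.add([[0.1, 0.2]])   # adding before training trips the guard
except RuntimeError as e:
    print(e)
index.train([[0.0, 0.0]])
index.add([[0.1, 0.2]])       # succeeds after training
```

This is why introducing SQ8 without an accompanying `train()` call breaks index loading.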
closed
https://github.com/huggingface/datasets/issues/1941
2021-02-25T01:30:54
2021-02-25T14:28:46
2021-02-25T14:28:46
{ "login": "mkserge", "id": 2992022, "type": "User" }
[]
false
[]
815,770,012
1,940
Side effect when filtering data due to `does_function_return_dict` call in `Dataset.map()`
Hi there!

In my codebase I have a function to filter rows in a dataset, selecting only a certain number of examples per class. The function takes an extra argument to maintain a counter of the number of dataset rows/examples already selected per class, which are the ones I want to keep in the end:

```python
def fill_train_examples_per_class(example, per_class_limit: int, counter: collections.Counter):
    label = int(example['label'])
    current_counter = counter.get(label, 0)
    if current_counter < per_class_limit:
        counter[label] = current_counter + 1
        return True
    return False
```

At some point I invoke it through the `Dataset.filter()` method in the `arrow_dataset.py` module like this:

```python
...
kwargs = {"per_class_limit": train_examples_per_class_limit, "counter": Counter()}
datasets['train'] = datasets['train'].filter(fill_train_examples_per_class, num_proc=1, fn_kwargs=kwargs)
...
```

The problem is that passing a stateful container (the counter) provokes a side effect in the resulting filtered dataset. This is because at some point in `filter()`, `map()`'s helper `does_function_return_dict` is invoked in line [1290](https://github.com/huggingface/datasets/blob/96578adface7e4bc1f3e8bafbac920d72ca1ca60/src/datasets/arrow_dataset.py#L1290). When this occurs, the counter's state is modified by the test calls the helper makes on the 1 or 2 rows selected in lines 1288 and 1289 of the same file (`test_inputs` and `test_indices` respectively). This happens outside the user's control (the user can't, for example, reset the counter's state before execution continues), producing an undesired side effect in the results. In my case, the resulting dataset, even though the counter's final counts are correct, lacks one instance each of classes 0 and 1 (which happen to be the classes of the first two examples of my dataset).

The rest of the classes in my dataset contain the right number of examples, as they were not affected by the `does_function_return_dict` call. I've debugged my code extensively and worked around the problem by hardcoding the necessary behavior (basically setting `update_data=True` in line 1290), after which I obtain the expected results without the side effect.

Is there a way to avoid the call to `does_function_return_dict` in `map()`'s line 1290? (e.g. extracting the information that `does_function_return_dict` returns without making test calls to the user function on dataset rows 0 and 1.)

Thanks in advance,
Francisco Perez-Sorrosal
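A stdlib reproduction of the failure mode described above (the dataset and the probing call are simulated; function and variable names are illustrative):

```python
from collections import Counter

def keep_up_to_limit(example, per_class_limit, counter):
    """Keep an example only while its class is under the per-class quota."""
    label = example["label"]
    if counter.get(label, 0) < per_class_limit:
        counter[label] += 1
        return True
    return False

rows = [{"label": 0}, {"label": 1}, {"label": 0}, {"label": 1}]

# What filter()-style probing does internally: call the predicate on the
# first row just to inspect its return type. With a stateful counter, the
# probe consumes part of the quota before the real filtering starts.
counter = Counter()
keep_up_to_limit(rows[0], 1, counter)  # probe call, result discarded

kept = [r for r in rows if keep_up_to_limit(r, 1, counter)]
# class 0's quota was already spent by the probe, so row 0 is dropped
print(kept)  # [{'label': 1}]
```

The counter ends with the correct totals, yet the kept rows are wrong, which matches the symptom reported above.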
closed
https://github.com/huggingface/datasets/issues/1940
2021-02-24T19:18:56
2021-03-23T15:26:49
2021-03-23T15:26:49
{ "login": "francisco-perez-sorrosal", "id": 918006, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
815,680,510
1,939
[firewalled env] OFFLINE mode
This issue comes from a need to be able to run `datasets` in a firewalled environment, which currently makes the software hang until it times out, as it's unable to complete the network calls. I propose the following approach to solving this problem, using `run_seq2seq.py` as a sample program. There are two possible ways to go about it.

## 1. Manual

Manually prepare the data and metrics files, that is, transfer the dataset and the metrics to the firewalled instance, and run:

```
DATASETS_OFFLINE=1 run_seq2seq.py --train_file xyz.csv --validation_file xyz.csv ...
```

`datasets` must not make any network calls, and if there is logic that does and something is missing, it should assert that this or that action requires network access and therefore can't proceed.

## 2. Automatic

In some clouds one can prepare a data storage ahead of time in a normal networked environment, but which doesn't have GPUs, and then switch to the GPU instance, which is firewalled but can access all the cached data. This is the ideal situation, since in this scenario we don't have to do anything manually, but simply run the same application twice:

1. on the non-firewalled instance:

```
run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...
```

which should download and cache everything.

2. and then immediately after on the firewalled instance, which shares the same filesystem:

```
DATASETS_OFFLINE=1 run_seq2seq.py --dataset_name wmt16 --dataset_config ro-en ...
```

The metrics and datasets should have been cached by invocation number 1, and any network calls should be skipped; if the logic is missing data, it should assert and not try to fetch anything from online.

## Common Issues

1. For example, `datasets` currently tries to look up online datasets even if the files contain json or csv, despite the paths already being provided:

```
if dataset and path in _PACKAGED_DATASETS_MODULES:
```

2. It has an issue with metrics. E.g. I had to manually copy `rouge/rouge.py` from the `datasets` repo to the current dir, or it was hanging. I had to comment out the `head_hf_s3(...)` calls to make things work. So all those `try: head_hf_s3(...)` shouldn't be tried with `DATASETS_OFFLINE=1`.

Here is the corresponding issue for `transformers`: https://github.com/huggingface/transformers/issues/10379

Thanks.
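A minimal sketch of the kind of guard being proposed, assuming a hypothetical `fetch_from_hub` helper and the `DATASETS_OFFLINE` variable named above (not the library's actual internals):

```python
import os

def is_offline_mode():
    # Guard driven by the DATASETS_OFFLINE variable proposed in this issue.
    return os.environ.get("DATASETS_OFFLINE", "0") == "1"

def fetch_from_hub(url):
    """Hypothetical network helper: fail fast instead of hanging when offline."""
    if is_offline_mode():
        raise ConnectionError(
            f"Offline mode is enabled (DATASETS_OFFLINE=1); cannot reach {url}. "
            "Use locally cached files instead."
        )
    # ... the real network call would go here ...
    return "<response>"

os.environ["DATASETS_OFFLINE"] = "1"
try:
    fetch_from_hub("https://huggingface.co/datasets/wmt16")
except ConnectionError as e:
    print(e)
```

The point is the error surfaces immediately with an actionable message, rather than the process hanging until a socket timeout.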
closed
https://github.com/huggingface/datasets/issues/1939
2021-02-24T17:13:42
2021-03-05T05:09:54
2021-03-05T05:09:54
{ "login": "stas00", "id": 10676103, "type": "User" }
[]
false
[]
815,647,774
1,938
Disallow ClassLabel with no names
It was possible to create a ClassLabel without specifying the names or the number of classes. This was causing silent issues as in #1936 and breaking the conversion methods str2int and int2str. cc @justin-yan
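For context, a stdlib sketch of the name/id mapping behind `str2int` and `int2str` that breaks when names are missing (illustrative class, not the `datasets` implementation):

```python
# Illustrative sketch of the name<->id mapping behind str2int/int2str.
# With names=None there is nothing to look up, so both directions fail,
# which is the silent breakage this PR disallows at construction time.
class ClassLabelSketch:
    def __init__(self, names=None):
        self.names = names
        self._str2int = {name: i for i, name in enumerate(names)} if names else None

    def str2int(self, value):
        if self._str2int is None:
            raise ValueError("ClassLabel has no names: cannot convert string to int")
        return self._str2int[value]

    def int2str(self, value):
        if self.names is None:
            raise ValueError("ClassLabel has no names: cannot convert int to string")
        return self.names[value]

labels = ClassLabelSketch(names=["neg", "pos"])
print(labels.str2int("pos"))  # 1
print(labels.int2str(0))      # neg
```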
closed
https://github.com/huggingface/datasets/pull/1938
2021-02-24T16:37:57
2021-02-25T11:27:29
2021-02-25T11:27:29
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
815,163,943
1,937
CommonGen dataset page shows an error OSError: [Errno 28] No space left on device
The dataset viewer page for CommonGen, https://huggingface.co/datasets/viewer/?dataset=common_gen, shows ![image](https://user-images.githubusercontent.com/10104354/108959311-1865e600-7629-11eb-868c-cf4cb27034ea.png)
closed
https://github.com/huggingface/datasets/issues/1937
2021-02-24T06:47:33
2021-02-26T11:10:06
2021-02-26T11:10:06
{ "login": "yuchenlin", "id": 10104354, "type": "User" }
[ { "name": "nlp-viewer", "color": "94203D" } ]
false
[]
814,726,512
1,936
[WIP] Adding Support for Reading Pandas Category
@lhoestq - continuing our conversation from https://github.com/huggingface/datasets/issues/1906#issuecomment-784247014

The goal of this PR is to support `Dataset.from_pandas(df)` where the dataframe contains a Category. Just the 4-line change below actually does seem to work:

```
>>> from datasets import Dataset
>>> import pandas as pd
>>> df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
>>> ds = Dataset.from_pandas(df)
>>> ds.to_pandas()
   0
0  a
1  b
2  c
3  a
>>> ds.to_pandas().dtypes
0    category
dtype: object
```

`save_to_disk`, etc. all seem to work as well.

The main thing that is theoretically "incorrect" if we leave this is:

```
>>> ds.features.type
StructType(struct<0: int64>)
```

There are a decent number of references to this property in the library, but I can't find anything that actually breaks as a result of this being int64 vs. dictionary.

I think the gist of my question is: (a) do we *need* to change the dtype of ClassLabel and have `get_nested_type` return a `pyarrow.DictionaryType` instead of int64? And (b) do you *want* it to change?

The biggest challenge I see to implementing this correctly is that the data will need to be passed in along with the pyarrow schema when instantiating the ClassLabel (I *think* this is unavoidable, since the type itself doesn't contain the actual label values), which could be a fairly intrusive change - e.g. `from_arrow_schema`'s interface would need to change to include optional arrow data? Once we start down this path of modifying the public interfaces, I am admittedly feeling a little bit outside my comfort zone.

Additionally, I think `int2str`, `str2int`, and `encode_example` probably won't work - but I can't find any usages of them in the library itself.
closed
https://github.com/huggingface/datasets/pull/1936
2021-02-23T18:32:54
2022-03-09T18:46:22
2022-03-09T18:46:22
{ "login": "justin-yan", "id": 7731709, "type": "User" }
[]
true
[]
814,623,827
1,935
add CoVoST2
This PR adds the CoVoST2 dataset for speech translation and ASR. https://github.com/facebookresearch/covost#covost-2 The dataset requires manual download as the download page requests an email address and the URLs are temporary. The dummy data is a bit bigger because of the mp3 files and 36 configs.
closed
https://github.com/huggingface/datasets/pull/1935
2021-02-23T16:28:16
2021-02-24T18:09:32
2021-02-24T18:05:09
{ "login": "patil-suraj", "id": 27137566, "type": "User" }
[]
true
[]
814,437,190
1,934
Add Stanford Sentiment Treebank (SST)
I am going to add SST:

- **Name:** The Stanford Sentiment Treebank
- **Description:** The first corpus with fully labeled parse trees that allows for a complete analysis of the compositional effects of sentiment in language
- **Paper:** [Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank](https://nlp.stanford.edu/~socherr/EMNLP2013_RNTN.pdf)
- **Data:** https://nlp.stanford.edu/sentiment/index.html
- **Motivation:** Already requested in #353, SST is a popular dataset for Sentiment Classification

What's the difference with the [_SST-2_](https://huggingface.co/datasets/viewer/?dataset=glue&config=sst2) dataset included in GLUE? Essentially, SST-2 is a version of SST where:

- the labels were mapped from real numbers in [0.0, 1.0] to a binary label: {0, 1}
- the labels of the *sub-sentences* were included only in the training set
- the labels in the test set are obfuscated

So there is a lot more information in the original SST. The tricky bit is that the data is scattered across many text files and, for one in particular, I couldn't find the original encoding ([*but I'm not the only one*](https://groups.google.com/g/word2vec-toolkit/c/QIUjLw6RqFk/m/_iEeyt428wkJ) 🎵). The only solution I found was to manually replace all the è, ë, ç and so on in a `utf-8` copy of the text file. I uploaded the result to my Dropbox and I am using that as the main repo for the dataset.

Also, the _sub-sentences_ are built at run-time from the information encoded in several text files, so generating the examples is a bit more cumbersome than usual. Luckily, the dataset is not enormous.

I plan to divide the dataset into 2 configs: one with just whole sentences and their labels, the other with sentences _and their sub-sentences_ with their labels. Each config will be split into train, validation and test. Hopefully this makes sense; we may discuss it in the PR I'm going to submit.
closed
https://github.com/huggingface/datasets/issues/1934
2021-02-23T12:53:16
2021-03-18T17:51:44
2021-03-18T17:51:44
{ "login": "patpizio", "id": 15801338, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
814,335,846
1,933
Use arrow ipc file format
According to the [documentation](https://arrow.apache.org/docs/format/Columnar.html?highlight=arrow1#ipc-file-format), it's identical to the streaming format except that it contains the memory offsets of each sample:

> We define a “file format” supporting random access that is built with the stream format. The file starts and ends with a magic string ARROW1 (plus padding). What follows in the file is identical to the stream format. At the end of the file, we write a footer containing a redundant copy of the schema (which is a part of the streaming format) plus memory offsets and sizes for each of the data blocks in the file. This enables random access to any record batch in the file. See File.fbs for the precise details of the file footer.

Since it stores more metadata regarding the positions of the examples in the file, it should enable better example retrieval performance. However, from the discussion in https://github.com/huggingface/datasets/issues/1803 it looks like that's unfortunately not the case. Maybe in the future this will allow speed gains.

I think it's still a good idea to start using it anyway, for these reasons:

- in the future we may have speed gains
- it contains the arrow streaming format data
- it's compatible with the pyarrow Dataset implementation (it allows loading remote dataframes, for example) if we want to use it in the future
- it's also the format used by arrow feather if we want to use it in the future
- it's roughly the same size as the streaming format
- it's easy to keep backward compatibility with the streaming format
closed
https://github.com/huggingface/datasets/pull/1933
2021-02-23T10:38:24
2023-10-30T16:20:19
2023-09-25T09:20:38
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
814,326,116
1,932
Fix builder config creation with data_dir
The `data_dir` parameter wasn't taken into account when creating the `config_id`, therefore the resulting builder config was considered not custom. However, a builder config that is non-custom must not have a name that collides with the predefined builder config names, so this resulted in a `ValueError("Cannot name a custom BuilderConfig the same as an available...")`.

I fixed that by commenting out the line that used to ignore the `data_dir` when creating the config. It was previously ignored, before the introduction of the config id, because we didn't want to change the config name. Now it's fine to take it into account for the config id.

Creating a config with a `data_dir` works again. @patrickvonplaten
closed
https://github.com/huggingface/datasets/pull/1932
2021-02-23T10:26:02
2021-02-23T10:45:28
2021-02-23T10:45:27
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
814,225,074
1,931
add m_lama (multilingual lama) dataset
Add a multilingual (machine translated and automatically generated) version of the LAMA benchmark. For details see the paper https://arxiv.org/pdf/2102.00894.pdf
closed
https://github.com/huggingface/datasets/pull/1931
2021-02-23T08:11:57
2021-03-01T10:01:03
2021-03-01T10:01:03
{ "login": "pdufter", "id": 13961899, "type": "User" }
[]
true
[]
814,055,198
1,930
updated the wino_bias dataset
Updated the `wino_bias.py` script:

- updated the `data_url`
- added different configurations for different data splits
- added the `coreference_cluster` to the data features
closed
https://github.com/huggingface/datasets/pull/1930
2021-02-23T03:07:40
2021-04-07T15:24:56
2021-04-07T15:24:56
{ "login": "JieyuZhao", "id": 22306304, "type": "User" }
[]
true
[]
813,929,669
1,929
Improve typing and style and fix some inconsistencies
This PR:

* improves typing (mostly more consistent use of `typing.Optional`)
* `DatasetDict.cleanup_cache_files` now correctly returns a dict
* replaces `dict()` with the corresponding literal
* uses `dict_to_copy.copy()` instead of `dict(dict_to_copy)` for shallow copying
closed
https://github.com/huggingface/datasets/pull/1929
2021-02-22T22:47:41
2021-02-24T16:16:14
2021-02-24T14:03:54
{ "login": "mariosasko", "id": 47462742, "type": "User" }
[]
true
[]
813,793,434
1,928
Updating old cards
Updated the cards for [Allocine](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/allocine), [CNN/DailyMail](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/cnn_dailymail), and [SNLI](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/snli). For the most part, the information was just rearranged or rephrased, but the social impact statements are new.
closed
https://github.com/huggingface/datasets/pull/1928
2021-02-22T19:26:04
2021-02-23T18:19:25
2021-02-23T18:19:25
{ "login": "mcmillanmajora", "id": 26722925, "type": "User" }
[]
true
[]
813,768,935
1,927
Update dataset card of wino_bias
Updated the info for the wino_bias dataset.
closed
https://github.com/huggingface/datasets/pull/1927
2021-02-22T18:51:34
2022-09-23T13:35:09
2022-09-23T13:35:08
{ "login": "JieyuZhao", "id": 22306304, "type": "User" }
[ { "name": "dataset contribution", "color": "0e8a16" } ]
true
[]
813,607,994
1,926
Fix: Wiki_dpr - add missing scalar quantizer
All the prebuilt wiki_dpr indexes already use SQ8; I forgot to update the wiki_dpr script after building them. Now it's finally done. The SQ8 scalar quantizer doesn't reduce the retrieval performance of the index, as shown in retrieval experiments on RAG. The quantizer reduces the size of the index a lot but increases index building time.
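The idea behind SQ8, sketched with numpy (this is the general technique, not the faiss implementation): each float32 component is mapped to one byte, so the stored vectors take roughly 4x less space at a small cost in precision.

```python
import numpy as np

rng = np.random.default_rng(0)
vecs = rng.standard_normal((1000, 8)).astype(np.float32)

# Per-dimension affine scalar quantization to 8 bits, a rough sketch of SQ8.
lo, hi = vecs.min(axis=0), vecs.max(axis=0)
scale = (hi - lo) / 255.0
codes = np.round((vecs - lo) / scale).astype(np.uint8)   # 1 byte per component
decoded = codes.astype(np.float32) * scale + lo          # approximate reconstruction

print(codes.nbytes, "bytes vs", vecs.nbytes)             # 4x smaller
print(float(np.abs(decoded - vecs).max()))               # small reconstruction error
```

The "increases index building time" part comes from the training pass a real quantizer needs before vectors can be encoded.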
closed
https://github.com/huggingface/datasets/pull/1926
2021-02-22T15:32:05
2021-02-22T15:49:54
2021-02-22T15:49:53
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
813,600,902
1,925
Fix: Wiki_dpr - fix when with_embeddings is False or index_name is "no_index"
Fix the bugs noticed in #1915.

There was a bug when `with_embeddings=False` where the configuration name was the same as for `with_embeddings=True`, which led the dataset builder to perform wrong verifications (for example, it expected to download the embeddings even for `with_embeddings=False`). Another issue was that setting `index_name="no_index"` didn't set `with_index` to `False`. I fixed both of them and added dummy data for those configurations for testing.
closed
https://github.com/huggingface/datasets/pull/1925
2021-02-22T15:23:46
2021-02-25T01:33:48
2021-02-22T15:36:08
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
813,599,733
1,924
Anonymous Dataset Addition (i.e Anonymous PR?)
Hello,

Thanks a lot for your library. We plan to submit a paper on OpenReview using the anonymous setting. Is it possible to add a new dataset without breaking anonymity, with a link to the paper?

Cheers
@eusip
closed
https://github.com/huggingface/datasets/issues/1924
2021-02-22T15:22:30
2022-10-05T13:07:11
2022-10-05T13:07:11
{ "login": "PierreColombo", "id": 22492839, "type": "User" }
[]
false
[]
813,363,472
1,923
Fix save_to_disk with relative path
As noticed in #1919 and #1920, the target directory was not created using `makedirs`, so saving to it raised a `FileNotFoundError`. For absolute paths it worked, but not for the right reason: the target path happened to be the same as the temporary path where in-memory data are written as an intermediary step. I added the `makedirs` call using `fs.makedirs` in order to support remote filesystems, and I fixed the issue with the target path being the temporary path. I also added a test case for `save_to_disk` with relative paths.

Thanks to @M-Salti for reporting and investigating.
closed
https://github.com/huggingface/datasets/pull/1923
2021-02-22T10:27:19
2021-02-22T11:22:44
2021-02-22T11:22:43
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
813,140,806
1,922
How to update the "wino_bias" dataset
Hi all, Thanks for the efforts to collect all the datasets! But I think there is a problem with the wino_bias dataset. The current link is not correct. How can I update that? Thanks!
open
https://github.com/huggingface/datasets/issues/1922
2021-02-22T05:39:39
2021-02-22T10:35:59
null
{ "login": "JieyuZhao", "id": 22306304, "type": "User" }
[]
false
[]
812,716,042
1,921
Standardizing datasets dtypes
This PR follows up on the discussion in #1900 to have an explicit set of basic dtypes for datasets. This moves away from `str(pyarrow.DataType)` as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported `Value` dtypes. I believe in practice this should be backward compatible, since anyone previously using `Value()` would only have been able to use dtypes that had an identically named pyarrow factory function, all of which are explicitly supported here. `float32` and `float64` act as the official datasets dtypes, which resolves the tension between `double` being the pyarrow dtype name and `float64` being the pyarrow type factory function.
closed
https://github.com/huggingface/datasets/pull/1921
2021-02-20T22:04:01
2021-02-22T09:44:10
2021-02-22T09:44:10
{ "login": "justin-yan", "id": 7731709, "type": "User" }
[]
true
[]
812,628,220
1,920
Fix save_to_disk issue
Fixes #1919
closed
https://github.com/huggingface/datasets/pull/1920
2021-02-20T14:22:39
2021-02-22T10:30:11
2021-02-22T10:30:11
{ "login": "M-Salti", "id": 9285264, "type": "User" }
[]
true
[]
812,626,872
1,919
Failure to save with save_to_disk
When I try to save a dataset locally using the `save_to_disk` method I get the error:

```bash
FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow'
```

To replicate:

1. Install `datasets` from master
2. Run this code:

```python
from datasets import load_dataset

squad = load_dataset("squad")  # or any other dataset
squad.save_to_disk("squad")  # error here
```

The problem is that the method does not create a directory named `dataset_path` to save the dataset in (i.e. it doesn't create the *train* and *validation* directories in this case). After creating the directory, the problem resolves. I'll open a PR soon doing that and linking this issue.
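The missing-directory fix described above, sketched with the stdlib (the saver function and paths are illustrative, not the library's code):

```python
import os
import tempfile

def save_split(dataset_path, split, payload):
    """Illustrative saver: would fail like save_to_disk without the makedirs step."""
    target = os.path.join(dataset_path, split)
    os.makedirs(target, exist_ok=True)  # the missing step behind the FileNotFoundError
    out = os.path.join(target, f"{split}.arrow")
    with open(out, "wb") as f:
        f.write(payload)
    return out

with tempfile.TemporaryDirectory() as tmp:
    path = save_split(os.path.join(tmp, "squad"), "train", b"\x00")
    existed = os.path.exists(path)
print(existed)  # True
```

Without the `os.makedirs(..., exist_ok=True)` line, `open(out, "wb")` raises `FileNotFoundError` because the `train` directory does not exist yet.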
closed
https://github.com/huggingface/datasets/issues/1919
2021-02-20T14:18:10
2021-03-03T17:40:27
2021-03-03T17:40:27
{ "login": "M-Salti", "id": 9285264, "type": "User" }
[]
false
[]
812,541,510
1,918
Fix QA4MRE download URLs
The URLs in the `dataset_infos` and `README` are correct, only the ones in the download script needed updating.
closed
https://github.com/huggingface/datasets/pull/1918
2021-02-20T07:32:17
2021-02-22T13:35:06
2021-02-22T13:35:06
{ "login": "M-Salti", "id": 9285264, "type": "User" }
[]
true
[]
812,390,178
1,917
UnicodeDecodeError: windows 10 machine
Windows 10, Python 3.6.8. When running

```python
import datasets

oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am")
print(oscar_am["train"][0])
```

I get the following error:

```
File "C:\PYTHON\3.6.8\lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 58: character maps to <undefined>
```
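The root-cause pattern behind this traceback: on Windows, `open()` without an explicit encoding falls back to the locale codec (cp1252 here), which cannot decode UTF-8 data such as Amharic text. A stdlib sketch of the failure and the fix (the sample file is illustrative):

```python
import os
import tempfile

amharic = "ሰላም ለዓለም"  # UTF-8 text whose byte sequence cp1252 cannot decode

path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write(amharic)

# Decoding UTF-8 bytes as cp1252 fails the way the traceback shows.
try:
    with open(path, encoding="cp1252") as f:
        f.read()
    decode_failed = False
except UnicodeDecodeError:
    decode_failed = True

# The fix: pass encoding="utf-8" explicitly when reading the data files.
with open(path, encoding="utf-8") as f:
    text = f.read()
print(decode_failed, text == amharic)
```

This is why the error only shows up on Windows: on Linux/macOS the default locale encoding is typically already UTF-8.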
closed
https://github.com/huggingface/datasets/issues/1917
2021-02-19T22:13:05
2021-02-19T22:41:11
2021-02-19T22:40:28
{ "login": "yosiasz", "id": 900951, "type": "User" }
[]
false
[]
812,291,984
1,916
Remove unused py_utils objects
Remove unused/unnecessary py_utils functions/classes.
closed
https://github.com/huggingface/datasets/pull/1916
2021-02-19T19:51:25
2021-02-22T14:56:56
2021-02-22T13:32:49
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
812,229,654
1,915
Unable to download `wiki_dpr`
I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran:

```python
curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")
```

However, I got the following error:

```
datasets.utils.info_utils.UnexpectedDownloadedFile: {'embeddings_index'}
```

I tried adding the flags `with_embeddings=False` and `with_index=False`:

```python
curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False, embeddings_name="multiset", index_name="no_index")
```

But I got the following error:

```
raise ExpectedMoreDownloadedFiles(str(set(expected_checksums) - set(recorded_checksums)))
datasets.utils.info_utils.ExpectedMoreDownloadedFiles: {
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_5',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_15',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_30',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_36',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_18',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_41',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_13',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_48',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_10',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_23',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_14',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_34',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_43',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_40',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_47',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_3',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_24',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_7',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_33',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_46',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_42',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_27',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_29',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_26',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_22',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_4',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_20',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_39',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_6',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_16',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_8',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_35',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_49',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_17',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_25',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_0',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_38',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_12',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_44',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_1',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_32',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_19',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_31',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_37',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_9',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_11',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_21',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_28',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_45',
'https://dl.fbaipublicfiles.com/rag/rag_multiset_embeddings/wiki_passages_2'}
```

Is there anything else I need to set to download the dataset?

**UPDATE**: just running `curr_dataset = load_dataset("wiki_dpr", with_embeddings=False, with_index=False)` gives me the same error.
closed
https://github.com/huggingface/datasets/issues/1915
2021-02-19T18:11:32
2021-03-03T17:40:48
2021-03-03T17:40:48
{ "login": "nitarakad", "id": 18504534, "type": "User" }
[]
false
[]
812,149,201
1,914
Fix logging imports and make all datasets use library logger
Fix library relative logging imports and make all datasets use library logger.
closed
https://github.com/huggingface/datasets/pull/1914
2021-02-19T16:12:34
2021-02-21T19:48:03
2021-02-21T19:48:03
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
812,127,307
1,913
Add keep_linebreaks parameter to text loader
As asked in #870 and https://github.com/huggingface/transformers/issues/10269 there should be a parameter to keep the linebreaks when loading a text dataset. cc @sgugger @jncasey
closed
https://github.com/huggingface/datasets/pull/1913
2021-02-19T15:43:45
2021-02-19T18:36:12
2021-02-19T18:36:11
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
812,034,140
1,912
Update: WMT - use mirror links
As asked in #1892 I created mirrors of the data hosted on statmt.org and updated the wmt scripts. Now downloading the wmt datasets is blazing fast :) cc @stas00 @patrickvonplaten
closed
https://github.com/huggingface/datasets/pull/1912
2021-02-19T13:42:34
2021-02-24T13:44:53
2021-02-24T13:44:53
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
812,009,956
1,911
Saving processed dataset running infinitely
I have a text dataset of size 220M. For pre-processing, I need to tokenize it and filter out rows with overly long sequences. My tokenization took roughly 3hrs. I used map() with batch size 1024 and multi-processing with 96 processes. The filter() function was way too slow, so I used a hack with the pyarrow table filter function, which is damn fast, mentioned [here](https://github.com/huggingface/datasets/issues/1796): ```dataset._data = dataset._data.filter(...)``` The filter took 1 hr. Then I used `save_to_disk()` on the processed dataset and it has been running forever. I have been waiting for 8 hrs now and it has not written a single byte. In fact, it has actually read more than 100GB from disk; the screenshot below shows the stats using `iotop` (the second process is the one). <img width="1672" alt="Screenshot 2021-02-19 at 6 36 53 PM" src="https://user-images.githubusercontent.com/20911334/108508197-7325d780-72e1-11eb-8369-7c057d137d81.png"> I am not able to figure out whether this is an issue with the datasets library or whether it is due to my hack for the filter() function.
open
https://github.com/huggingface/datasets/issues/1911
2021-02-19T13:09:19
2021-02-23T07:34:44
null
{ "login": "ayubSubhaniya", "id": 20911334, "type": "User" }
[]
false
[]
811,697,108
1,910
Adding CoNLLpp dataset.
closed
https://github.com/huggingface/datasets/pull/1910
2021-02-19T05:12:30
2021-03-04T22:02:47
2021-03-04T22:02:47
{ "login": "ZihanWangKi", "id": 21319243, "type": "User" }
[]
true
[]
811,520,569
1,907
DBPedia14 Dataset Checksum bug?
Hi there!!! I've been using successfully the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase in the last couple of weeks, but in the last couple of days now I get this error: ``` Traceback (most recent call last): File "./conditional_classification/basic_pipeline.py", line 178, in <module> main() File "./conditional_classification/basic_pipeline.py", line 128, in main corpus.load_data(limit_train_examples_per_class=args.data_args.train_examples_per_class, File "/home/fp/dev/conditional_classification/conditional_classification/datasets_base.py", line 83, in load_data datasets = load_dataset(self.name, split=dataset_split) File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/load.py", line 609, in load_dataset builder_instance.download_and_prepare( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 526, in download_and_prepare self._download_and_prepare( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/builder.py", line 586, in _download_and_prepare verify_checksums( File "/home/fp/anaconda3/envs/conditional/lib/python3.8/site-packages/datasets/utils/info_utils.py", line 39, in verify_checksums raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/uc?export=download&id=0Bz8a_Dbh9QhbQ2Vic1kxMmZZQ1k'] ``` I've seen this has happened before in other datasets as reported in #537. I've tried clearing my cache and call again `load_dataset` but still is not working. My same codebase is successfully downloading and using other datasets (e.g. AGNews) without any problem, so I guess something has happened specifically to the DBPedia dataset in the last few days. Can you please check if there's a problem with the checksums? Or this is related to any other stuff? 
I've seen that the path in the cache for the dataset is `/home/fp/.cache/huggingface/datasets/d_bpedia14/dbpedia_14/2.0.0/a70413e39e7a716afd0e90c9e53cb053691f56f9ef5fe317bd07f2c368e8e897...` and includes `d_bpedia14` instead of `dbpedia_14`. Was this perhaps a bug introduced recently? Thanks!
closed
https://github.com/huggingface/datasets/issues/1907
2021-02-18T22:25:48
2021-02-22T23:22:05
2021-02-22T23:22:04
{ "login": "francisco-perez-sorrosal", "id": 918006, "type": "User" }
[]
false
[]
811,405,274
1,906
Feature Request: Support for Pandas `Categorical`
``` from datasets import Dataset import pandas as pd import pyarrow df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category")) pyarrow.Table.from_pandas(df) Dataset.from_pandas(df) # Throws NotImplementedError # TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_table ``` I'm curious if https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L796 could be built out in a way similar to `Sequence`? e.g. a `Map` class (or whatever name the maintainers might prefer) that can accept: ``` index_type = generate_from_arrow_type(pa_type.index_type) value_type = generate_from_arrow_type(pa_type.value_type) ``` and then additional code points to modify: - FeatureType: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L694 - A branch to handle Map in get_nested_type: https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L719 - I don't quite understand what `encode_nested_example` does but perhaps a branch there? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L755 - Similarly, I don't quite understand why `Sequence` is used this way in `generate_from_dict`, but perhaps a branch here? https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L775 I couldn't find other usages of `Sequence` outside of defining specific datasets, so I'm not sure if that's a comprehensive set of touchpoints.
open
https://github.com/huggingface/datasets/issues/1906
2021-02-18T19:46:05
2021-02-23T14:38:50
null
{ "login": "justin-yan", "id": 7731709, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" }, { "name": "generic discussion", "color": "c5def5" } ]
false
[]
811,384,174
1,905
Standardizing datasets.dtypes
This PR was further branched off of jdy-str-to-pyarrow-parsing, so it depends on https://github.com/huggingface/datasets/pull/1900 going first for the diff to be up-to-date (I'm not sure if there's a way for me to use jdy-str-to-pyarrow-parsing as a base branch while having it appear in the pull requests here). This moves away from `str(pyarrow.DataType)` as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes. I believe in practice this should be backward compatible, since anyone previously using Value() would only have been able to use dtypes that had an identically named pyarrow factory function, which are all explicitly supported here.
closed
https://github.com/huggingface/datasets/pull/1905
2021-02-18T19:15:31
2021-02-20T22:01:30
2021-02-20T22:01:30
{ "login": "justin-yan", "id": 7731709, "type": "User" }
[]
true
[]
811,260,904
1,904
Fix to_pandas for boolean ArrayXD
As noticed in #1887 the conversion of a dataset with a boolean ArrayXD feature types fails because of the underlying ListArray conversion to numpy requires `zero_copy_only=False`. zero copy is available for all primitive types except booleans see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pyarrow.Array.to_numpy and https://issues.apache.org/jira/browse/ARROW-2871?jql=text%20~%20%22boolean%20to_numpy%22 cc @SBrandeis
closed
https://github.com/huggingface/datasets/pull/1904
2021-02-18T16:30:46
2021-02-18T17:10:03
2021-02-18T17:10:01
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
811,145,531
1,903
Initial commit for the addition of TIMIT dataset
The points below need to be addressed:
- Creation of dummy dataset is failing
- Need to check on the data representation
- License is not creative commons. Copyright: Portions © 1993 Trustees of the University of Pennsylvania

Also the links (_except the download_) point to the ami corpus! ;-) @patrickvonplaten Requesting your comments, will be happy to address them!
closed
https://github.com/huggingface/datasets/pull/1903
2021-02-18T14:23:12
2021-03-01T09:39:12
2021-03-01T09:39:12
{ "login": "vrindaprabhu", "id": 16264631, "type": "User" }
[]
true
[]
810,931,171
1,902
Fix setimes_2 wmt urls
Continuation of #1901 Some other urls were missing https
closed
https://github.com/huggingface/datasets/pull/1902
2021-02-18T09:42:26
2021-02-18T09:55:41
2021-02-18T09:55:41
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
810,845,605
1,901
Fix OPUS dataset download errors
Replace http with https. https://github.com/huggingface/datasets/issues/854 https://discuss.huggingface.co/t/cannot-download-wmt16/2081
closed
https://github.com/huggingface/datasets/pull/1901
2021-02-18T07:39:41
2021-02-18T15:07:20
2021-02-18T09:39:21
{ "login": "YangWang92", "id": 3883941, "type": "User" }
[]
true
[]
810,512,488
1,900
Issue #1895: Bugfix for string_to_arrow timestamp[ns] support
Should resolve https://github.com/huggingface/datasets/issues/1895 The main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType. While adding unit-testing, I noticed that support for the double/float types also don't invert correctly, so I added them, which I believe would hypothetically make this section of `Value` redundant: ``` def __post_init__(self): if self.dtype == "double": # fix inferred type self.dtype = "float64" if self.dtype == "float": # fix inferred type self.dtype = "float32" ``` However, since I think Value.dtype is part of the public interface, removing that would result in a backward-incompatible change, so I didn't muck with that. The rest of the PR consists of docstrings that I added while developing locally so I could keep track of which functions were supposed to be inverses of each other, and thought I'd include them initially in case you want to keep them around, but I'm happy to delete or remove any of them at your request!
closed
https://github.com/huggingface/datasets/pull/1900
2021-02-17T20:26:04
2021-02-19T18:27:11
2021-02-19T18:27:11
{ "login": "justin-yan", "id": 7731709, "type": "User" }
[]
true
[]
810,308,332
1,899
Fix: ALT - fix duplicated examples in alt-parallel
As noticed in #1898 by @10-zin the examples of the `alt-paralel` configurations have all the same values for the `translation` field. This was due to a bad copy of a python dict. This PR fixes that.
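The class of bug described — reusing one dict object so every example ends up with the last values — is easy to reproduce in isolation (a generic illustration, not the actual loader code):

```python
rows = []
template = {"translation": None}
for text in ["hello", "world"]:
    template["translation"] = text   # mutates the one shared dict
    rows.append(template)            # every entry references the same object
print(rows)  # [{'translation': 'world'}, {'translation': 'world'}]

# The fix: build (or copy) a fresh dict per example
fixed = [{"translation": text} for text in ["hello", "world"]]
print(fixed)  # [{'translation': 'hello'}, {'translation': 'world'}]
```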
closed
https://github.com/huggingface/datasets/pull/1899
2021-02-17T15:53:56
2021-02-17T17:20:49
2021-02-17T17:20:49
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
810,157,251
1,898
ALT dataset has repeating instances in all splits
The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/ Seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized and has all splits. Would be great if this could be fixed :) Added a snapshot of the contents from the `explore-dataset` feature, for quick reference. ![image](https://user-images.githubusercontent.com/33179372/108206321-442a2d00-714c-11eb-882f-b4b6e708ef9c.png)
closed
https://github.com/huggingface/datasets/issues/1898
2021-02-17T12:51:42
2021-02-19T06:18:46
2021-02-19T06:18:46
{ "login": "10-zin", "id": 33179372, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
810,113,263
1,897
Fix PandasArrayExtensionArray conversion to native type
To make the conversion to csv work in #1887, we need PandasArrayExtensionArray, used for multidimensional numpy arrays, to be converted to pandas native types. However, previously pandas.core.internals.ExtensionBlock.to_native_types would fail with a PandasExtensionArray because 1. the PandasExtensionArray.isna method was wrong 2. the conversion of a PandasExtensionArray to a numpy array with dtype=object was returning a multidimensional array, while pandas expects a 1D array in this case (more info [here](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.api.extensions.ExtensionArray.html#pandas.api.extensions.ExtensionArray)) I fixed these two issues and now the conversion to native types works, and so does the export to csv. cc @SBrandeis
closed
https://github.com/huggingface/datasets/pull/1897
2021-02-17T11:48:24
2021-02-17T13:15:16
2021-02-17T13:15:15
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
809,630,271
1,895
Bug Report: timestamp[ns] not recognized
Repro: ``` from datasets import Dataset import pandas as pd import pyarrow df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H")) pyarrow.Table.from_pandas(df) Dataset.from_pandas(df) # Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type. ``` The factory function seems to be just "timestamp": https://arrow.apache.org/docs/python/generated/pyarrow.timestamp.html#pyarrow.timestamp It seems like https://github.com/huggingface/datasets/blob/master/src/datasets/features.py#L36-L43 could have a little bit of additional structure for handling these cases? I'd be happy to take a shot at opening a PR if I could receive some guidance on whether parsing something like `timestamp[ns]` and resolving it to timestamp('ns') is the goal of this method. Alternatively, if I'm using this incorrectly (e.g. is the expectation that we always provide a schema when timestamps are involved?), that would be very helpful to know as well! ``` $ pip list # only the relevant libraries/versions datasets 1.2.1 pandas 1.0.3 pyarrow 3.0.0 ```
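A minimal sketch of the parsing the issue asks about — splitting the string form `str(pa_type)` back into the arguments of `pyarrow.timestamp` (the helper name is made up, not the library's actual API):

```python
import re

def parse_timestamp_dtype(type_str):
    """Split 'timestamp[ns]' or 'timestamp[ns, tz=UTC]' into (unit, tz),
    ready to pass to pyarrow.timestamp(unit, tz=tz)."""
    match = re.fullmatch(r"timestamp\[(\w+)(?:, tz=([^\]]+))?\]", type_str)
    if match is None:
        raise ValueError(f"{type_str} is not a timestamp type string")
    return match.groups()

print(parse_timestamp_dtype("timestamp[ns]"))         # ('ns', None)
print(parse_timestamp_dtype("timestamp[s, tz=UTC]"))  # ('s', 'UTC')
```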
closed
https://github.com/huggingface/datasets/issues/1895
2021-02-16T20:38:04
2021-02-19T18:27:11
2021-02-19T18:27:11
{ "login": "justin-yan", "id": 7731709, "type": "User" }
[]
false
[]
809,609,654
1,894
benchmarking against MMapIndexedDataset
I am trying to benchmark my datasets-based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implementation uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB of tokens). Questions: 1) Is this (basically identical) performance expected? 2) Is there a scenario where this library will outperform `MMapIndexedDataset`? (maybe more examples/larger examples?) 3) Should I be using different benchmarking tools than `psrecord`/how do you guys do benchmarks? Thanks in advance! Sam
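A tiny pure-Python stand-in for what psrecord measures can be sketched with the stdlib (note the caveat in the comment: this deliberately does not capture memory-mapped pages, which is exactly what a mmap-backed dataset keeps off the heap):

```python
import time
import tracemalloc

def benchmark(fn, *args):
    """Wall-clock time plus peak Python-heap usage of one call.
    tracemalloc only sees Python-level allocations, not mmap'd pages,
    so for memory-mapped datasets a process-level tool like psrecord
    or psutil is still the right instrument."""
    tracemalloc.start()
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

result, elapsed, peak = benchmark(sum, range(1_000_000))
print(f"{elapsed:.3f}s, peak heap {peak} bytes")
```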
open
https://github.com/huggingface/datasets/issues/1894
2021-02-16T20:04:58
2021-02-17T18:52:28
null
{ "login": "sshleifer", "id": 6045025, "type": "User" }
[]
false
[]
809,556,503
1,893
wmt19 is broken
1. Check which lang pairs we have: `--dataset_name wmt19`: Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de'] 2. OK, let's pick `ru-en`: `--dataset_name wmt19 --dataset_config "ru-en"` no cookies: ``` Traceback (most recent call last): File "./run_seq2seq.py", line 661, in <module> main() File "./run_seq2seq.py", line 317, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 740, in load_dataset builder_instance.download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 572, in download_and_prepare self._download_and_prepare( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/builder.py", line 628, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt19/436092de5f3faaf0fc28bc84875475b384e90a5470fa6afaee11039ceddc5052/wmt_utils.py", line 755, in _split_generators downloaded_files = dl_manager.download_and_extract(urls_to_download) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 276, in download_and_extract return self.extract(self.download(url_or_urls)) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 191, in download downloaded_path_or_paths = map_nested( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 233, in map_nested mapped = [ File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 234, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in _single_map_nested mapped = 
[_single_map_nested((function, v, types, None, True)) for v in pbar] File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 190, in <listcomp> mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar] File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/py_utils.py", line 172, in _single_map_nested return function(data_struct) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/download_manager.py", line 211, in _download return cached_path(url_or_filename, download_config=download_config) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-ru.tar.gz ```
closed
https://github.com/huggingface/datasets/issues/1893
2021-02-16T18:39:58
2021-03-03T17:42:02
2021-03-03T17:42:02
{ "login": "stas00", "id": 10676103, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
809,554,174
1,892
request to mirror wmt datasets, as they are really slow to download
Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download. Thank you!
closed
https://github.com/huggingface/datasets/issues/1892
2021-02-16T18:36:11
2021-10-26T06:55:42
2021-03-25T11:53:23
{ "login": "stas00", "id": 10676103, "type": "User" }
[]
false
[]
809,550,001
1,891
suggestion to improve a missing dataset error
I was using `--dataset_name wmt19` all was good. Then thought perhaps wmt20 is out, so I tried to use `--dataset_name wmt20`, got 3 different errors (1 repeated twice), none telling me the real issue - that `wmt20` isn't in the `datasets`: ``` True, predict_with_generate=True) Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 323, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 335, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 274, in cached_path output_path = get_from_cache( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/utils/file_utils.py", line 584, in get_from_cache raise FileNotFoundError("Couldn't find file at {}".format(url)) FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py During handling of the above exception, another exception occurred: Traceback (most recent call last): File "./run_seq2seq.py", line 661, in <module> main() File "./run_seq2seq.py", line 317, in main datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", 
line 706, in load_dataset module_path, hash, resolved_file_path = prepare_module( File "/mnt/nvme1/code/huggingface/datasets-master/src/datasets/load.py", line 343, in prepare_module raise FileNotFoundError( FileNotFoundError: Couldn't find file locally at wmt20/wmt20.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/master/datasets/wmt20/wmt20.py. The file is also not present on the master branch on github. ``` Suggestion: if it is not in a local path, check that there is an actual `https://github.com/huggingface/datasets/tree/master/datasets/wmt20` first and assert "dataset `wmt20` doesn't exist in datasets", rather than trying to find a load script - since the whole repo is not there. The error occured when running: ``` cd examples/seq2seq export BS=16; rm -r output_dir; PYTHONPATH=../../src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python ./run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_val_samples 500 --dataset_name wmt20 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " ``` Thanks.
closed
https://github.com/huggingface/datasets/issues/1891
2021-02-16T18:29:13
2022-10-05T12:48:38
2022-10-05T12:48:38
{ "login": "stas00", "id": 10676103, "type": "User" }
[]
false
[]
809,395,586
1,890
Reformat dataset cards section titles
Titles are formatted like [Foo](#foo) instead of just Foo
closed
https://github.com/huggingface/datasets/pull/1890
2021-02-16T15:11:47
2021-02-16T15:12:34
2021-02-16T15:12:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
809,276,015
1,889
Implement to_dict and to_pandas for Dataset
With options to return a generator or the full dataset
closed
https://github.com/huggingface/datasets/pull/1889
2021-02-16T12:38:19
2021-02-18T18:42:37
2021-02-18T18:42:34
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[]
true
[]
809,241,123
1,888
Docs for adding new column on formatted dataset
As mentioned in #1872 we should add in the documentation how the format gets updated when new columns are added Close #1872
closed
https://github.com/huggingface/datasets/pull/1888
2021-02-16T11:45:00
2021-03-30T14:01:03
2021-02-16T11:58:57
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
809,229,809
1,887
Implement to_csv for Dataset
cc @thomwolf `to_csv` supports passing either a file path or a *binary* file object The writing is batched to avoid loading the whole table in memory
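The batched-writing idea can be sketched with just the stdlib (names like `write_csv_batched` and `batch_size` are illustrative, not the PR's actual signature):

```python
import csv
import io

def write_csv_batched(rows, fileobj, fieldnames, batch_size=1024):
    """Write rows to CSV in fixed-size batches so the whole table
    never needs to sit in memory at once."""
    writer = csv.DictWriter(fileobj, fieldnames=fieldnames)
    writer.writeheader()
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            writer.writerows(batch)
            batch = []
    if batch:  # flush the final partial batch
        writer.writerows(batch)

buf = io.StringIO()
write_csv_batched(({"a": i} for i in range(5)), buf, ["a"], batch_size=2)
print(buf.getvalue())
```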
closed
https://github.com/huggingface/datasets/pull/1887
2021-02-16T11:27:29
2021-02-19T09:41:59
2021-02-19T09:41:59
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[]
true
[]
809,221,885
1,886
Common voice
Started filling out information about the dataset and a dataset card. To do Create tagging file Update the common_voice.py file with more information
closed
https://github.com/huggingface/datasets/pull/1886
2021-02-16T11:16:10
2021-03-09T18:51:31
2021-03-09T18:51:31
{ "login": "BirgerMoell", "id": 1704131, "type": "User" }
[]
true
[]
808,881,501
1,885
add missing info on how to add large files
Thanks to @lhoestq's instructions I was able to add data files to a custom dataset repo. This PR is attempting to tell others how to do the same if they need to. @lhoestq
closed
https://github.com/huggingface/datasets/pull/1885
2021-02-15T23:46:39
2021-02-16T16:22:19
2021-02-16T11:44:12
{ "login": "stas00", "id": 10676103, "type": "User" }
[]
true
[]
808,755,894
1,884
dtype fix when using numpy arrays
As discussed in #625, this fix lets the user preserve the dtype of a numpy array when converting it to a pyarrow array, which was previously getting lost in the numpy array -> list -> pyarrow array conversion
closed
https://github.com/huggingface/datasets/pull/1884
2021-02-15T18:55:25
2021-07-30T11:01:18
2021-07-30T11:01:18
{ "login": "bhavitvyamalik", "id": 19718818, "type": "User" }
[]
true
[]
808,750,623
1,883
Add not-in-place implementations for several dataset transforms
Should we deprecate in-place versions of such methods?
closed
https://github.com/huggingface/datasets/pull/1883
2021-02-15T18:44:26
2021-02-24T14:54:49
2021-02-24T14:53:26
{ "login": "SBrandeis", "id": 33657802, "type": "User" }
[]
true
[]
808,716,576
1,882
Create Remote Manager
Refactoring to separate the concern of remote (HTTP/FTP requests) management.
open
https://github.com/huggingface/datasets/pull/1882
2021-02-15T17:36:24
2022-07-06T15:19:47
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
808,578,200
1,881
`list_datasets()` returns a list of strings, not objects
Here and there in the docs there is still stuff like this: ```python >>> datasets_list = list_datasets() >>> print(', '.join(dataset.id for dataset in datasets_list)) ``` However, my understanding is that `list_datasets()` returns a list of strings rather than a list of objects.
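The fix is just to drop the attribute access (shown here on a stand-in list, since the real ids come from the Hub):

```python
# hypothetical return value of list_datasets() — already a list of string ids
datasets_list = ["squad", "glue", "wikitext"]

# the docs' `dataset.id` access fails on a str; joining directly works
print(", ".join(datasets_list))  # squad, glue, wikitext
```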
closed
https://github.com/huggingface/datasets/pull/1881
2021-02-15T14:20:15
2021-02-15T15:09:49
2021-02-15T15:09:48
{ "login": "pminervini", "id": 227357, "type": "User" }
[]
true
[]
808,563,439
1,880
Update multi_woz_v22 checksums
As noticed in #1876 the checksums of this dataset are outdated. I updated them in this PR
closed
https://github.com/huggingface/datasets/pull/1880
2021-02-15T14:00:18
2021-02-15T14:18:19
2021-02-15T14:18:18
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
808,541,442
1,879
Replace flatten_nested
Replace `flatten_nested` with `NestedDataStructure.flatten`. This is a first step towards having all NestedDataStructure logic as a separated concern, independent of the caller/user of the data structure. Eventually, all checks (whether the underlying data is list, dict, etc.) will be only inside this class. I have also generalized the flattening, and now it handles multiple levels of nesting.
closed
https://github.com/huggingface/datasets/pull/1879
2021-02-15T13:29:40
2021-02-19T18:35:14
2021-02-19T18:35:14
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
808,526,883
1,878
Add LJ Speech dataset
This PR adds the LJ Speech dataset (https://keithito.com/LJ-Speech-Dataset/), as requested by #1841. The ASR format is based on #1767.

There are a couple of quirks that should be addressed:
- I tagged this dataset as `other-other-automatic-speech-recognition` and `other-other-text-to-speech` (as classified by paperswithcode). Since the number of speech datasets is about to grow, maybe these categories should be added to the main list?
- Similarly to #1767 this dataset uses only a single dummy sample to reduce the zip size (`wav`s are quite heavy). Is there a plan to allow LFS or S3 usage for dummy data in the repo?
- The dataset is distributed under the Public Domain license, which is not used anywhere else in the repo, AFAIK. Do you think Public Domain is worth adding to the tagger app as well?

Pinging @patrickvonplaten to review
closed
https://github.com/huggingface/datasets/pull/1878
2021-02-15T13:10:42
2021-02-15T19:39:41
2021-02-15T14:18:09
{ "login": "anton-l", "id": 26864830, "type": "User" }
[]
true
[]
808,462,272
1,877
Allow concatenation of both in-memory and on-disk datasets
This is a prerequisite for the addition of the `add_item` feature (see #1870). Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files). This assumption is used for pickling for example: - in-memory dataset can just be pickled/unpickled in-memory - on-disk dataset can be unloaded to only keep the filepaths when pickling, and then reloaded from the disk when unpickling Maybe let's have a design that allows a Dataset to have a Table that can be rebuilt from heterogeneous sources like in-memory tables or on-disk tables? This could also be further extended in the future One idea would be to define a list of sources and each source implements a way to reload its corresponding pyarrow Table. Then the dataset would be the concatenation of all these tables. Depending on the source type, the serialization using pickle would be different. In-memory data would be copied while on-disk data would simply be replaced by the path to these data. If you have some ideas you would like to share about the design/API feel free to do so :) cc @albertvillanova
closed
https://github.com/huggingface/datasets/issues/1877
2021-02-15T11:39:46
2021-03-26T16:51:58
2021-03-26T16:51:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
808,025,859
1,876
load_dataset("multi_woz_v22") NonMatchingChecksumError
Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError. To reproduce: `dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')` This will give the following error: ``` raise NonMatchingChecksumError(error_msg + str(bad_urls)) datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dialog_acts.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_003.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_004.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_005.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_006.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_007.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_008.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_009.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_010.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_012.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_013.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_014.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_015.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_016.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/train/dialogues_017.json', 
'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/dev/dialogues_002.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_001.json', 'https://github.com/budzianowski/multiwoz/raw/master/data/MultiWOZ_2.2/test/dialogues_002.json'] ```
closed
https://github.com/huggingface/datasets/issues/1876
2021-02-14T19:14:48
2021-08-04T18:08:00
2021-08-04T18:08:00
{ "login": "Vincent950129", "id": 5945326, "type": "User" }
[]
false
[]
807,887,267
1,875
Adding sari metric
Adding SARI metric that is used in evaluation of text simplification. This is required as part of the GEM benchmark.
closed
https://github.com/huggingface/datasets/pull/1875
2021-02-14T04:38:35
2021-02-17T15:56:27
2021-02-17T15:56:27
{ "login": "ddhruvkr", "id": 6061911, "type": "User" }
[]
true
[]
807,786,094
1,874
Adding Europarl Bilingual dataset
Implementation of the Europarl bilingual dataset described [here](https://opus.nlpl.eu/Europarl.php). This dataset allows using every language pair detailed in the original dataset. The loading script also handles the small errors contained in the original dataset (in very rare cases (1 over 10M) there are keys that reference nonexistent sentences). I chose to follow the style of a similar dataset available in this repository: `multi_para_crawl`.
closed
https://github.com/huggingface/datasets/pull/1874
2021-02-13T17:02:04
2021-03-04T10:38:22
2021-03-04T10:38:22
{ "login": "lucadiliello", "id": 23355969, "type": "User" }
[]
true
[]
807,750,745
1,873
add iapp_wiki_qa_squad
`iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles. It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset) to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in 5761/742/739 questions from 1529/191/192 articles.
closed
https://github.com/huggingface/datasets/pull/1873
2021-02-13T13:34:27
2021-02-16T14:21:58
2021-02-16T14:21:58
{ "login": "cstorm125", "id": 15519308, "type": "User" }
[]
true
[]
807,711,935
1,872
Adding a new column to the dataset after set_format was called
Hi, thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if it's a problem on my side. I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True)`. This converts the integer columns into tensors, but keeps the lists of strings as they are. I then call `map` to add a new column to my dataset, which is a **list of strings**. Once I iterate through my dataset, I get an error that the new column can't be converted into a tensor (which is probably caused by `set_format`). Below is some pseudocode: ```python def augment_func(sample: Dict) -> Dict: # do something return { "some_integer_column1" : augmented_data["some_integer_column1"], # <-- tensor "some_integer_column2" : augmented_data["some_integer_column2"], # <-- tensor "NEW_COLUMN": targets, # <-- list of strings } data = datasets.load_dataset(__file__, data_dir="...", split="train") data.set_format("torch", columns=["some_integer_column1", "some_integer_column2"], output_all_columns=True) augmented_dataset = data.map(augment_func, batched=False) for sample in augmented_dataset: print(sample) # fails ``` and the exception: ```python Traceback (most recent call last): File "dataset.py", line 487, in <module> main() File "dataset.py", line 471, in main for sample in augmented_dataset: File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 697, in __iter__ yield self._getitem( File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1069, in _getitem outputs = self._convert_outputs( File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 890, in _convert_outputs v = map_nested(command, v, **map_nested_kwargs) File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File 
"lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp> return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in command return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 850, in <listcomp> return [map_nested(command, i, **map_nested_kwargs) for i in x] File "lib/python3.8/site-packages/datasets/utils/py_utils.py", line 225, in map_nested return function(data_struct) File "lib/python3.8/site-packages/datasets/arrow_dataset.py", line 851, in command return torch.tensor(x, **format_kwargs) TypeError: new(): invalid data type 'str' ``` Thanks!
closed
https://github.com/huggingface/datasets/issues/1872
2021-02-13T09:14:35
2021-03-30T14:01:45
2021-03-30T14:01:45
{ "login": "villmow", "id": 2743060, "type": "User" }
[]
false
[]
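The failure in issue 1872 above comes from `set_format("torch")` recursively trying to tensor-convert every output column, including the new list-of-strings column added by `map`. A minimal pure-Python sketch of that failure mode (the `to_tensor` helper is a hypothetical simplification, not the actual datasets implementation):

```python
# Stand-in for what the "torch" formatter does to every output column:
# recursively convert values to tensors, which fails on strings.
def to_tensor(x):
    if isinstance(x, list):
        return [to_tensor(i) for i in x]
    if isinstance(x, str):
        # mirrors the reported error: torch.tensor() cannot take strings
        raise TypeError("new(): invalid data type 'str'")
    return float(x)  # stand-in for torch.tensor(x)

sample = {
    "some_integer_column1": [1, 2, 3],  # fine: converted element-wise
    "NEW_COLUMN": ["a", "b"],           # list of strings added by map()
}

converted = {}
error = None
for name, column in sample.items():
    try:
        converted[name] = to_tensor(column)
    except TypeError as exc:
        error = str(exc)

print(converted)  # only the integer column survives conversion
print(error)      # the string column raises, as in the traceback above
```

This is why excluding the new column from the formatted columns (or resetting the format) avoids the error.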
807,697,671
1,871
Add newspop dataset
closed
https://github.com/huggingface/datasets/pull/1871
2021-02-13T07:31:23
2021-03-08T10:12:45
2021-03-08T10:12:45
{ "login": "frankier", "id": 299380, "type": "User" }
[]
true
[]
807,306,564
1,870
Implement Dataset add_item
Implement `Dataset.add_item`. Close #1854.
closed
https://github.com/huggingface/datasets/pull/1870
2021-02-12T15:03:46
2021-04-23T10:01:31
2021-04-23T10:01:31
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
true
[]
807,159,835
1,869
Remove outdated commands in favor of huggingface-cli
Removing the old user commands since `huggingface_hub` is going to be used instead. cc @julien-c
closed
https://github.com/huggingface/datasets/pull/1869
2021-02-12T11:28:10
2021-02-12T16:13:09
2021-02-12T16:13:08
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
807,138,159
1,868
Update oscar sizes
This commit https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383 removed empty lines from the oscar deduplicated datasets. This PR updates the size of each deduplicated dataset to fix possible `NonMatchingSplitsSizesError` errors. cc @cahya-wirawan
closed
https://github.com/huggingface/datasets/pull/1868
2021-02-12T10:55:35
2021-02-12T11:03:07
2021-02-12T11:03:06
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
807,127,181
1,867
ERROR WHEN USING SET_TRANSFORM()
Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797 However, when I try to use Trainer from transformers with such dataset, it throws an error: ``` TypeError: __init__() missing 1 required positional argument: 'transform' [INFO|trainer.py:357] 2021-02-12 10:18:09,893 >> The following columns in the training set don't have a corresponding argument in `AlbertForMaskedLM.forward` and have been ignored: text. Exception in device=TPU:0: __init__() missing 1 required positional argument: 'transform' Traceback (most recent call last): File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 324, in _start_fn fn(gindex, *args) File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 368, in _mp_fn main() File "/home/alejandro_vaca/transformers/examples/language-modeling/run_mlm_wwm.py", line 332, in main data_collator=data_collator, File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 286, in __init__ self._remove_unused_columns(self.train_dataset, description="training") File "/anaconda3/envs/torch-xla-1.7/lib/python3.6/site-packages/transformers/trainer.py", line 359, in _remove_unused_columns dataset.set_format(type=dataset.format["type"], columns=columns) File "/home/alejandro_vaca/datasets/src/datasets/fingerprint.py", line 312, in wrapper out = func(self, *args, **kwargs) File "/home/alejandro_vaca/datasets/src/datasets/arrow_dataset.py", line 818, in set_format _ = get_formatter(type, **format_kwargs) File "/home/alejandro_vaca/datasets/src/datasets/formatting/__init__.py", line 112, in get_formatter return _FORMAT_TYPES[format_type](**format_kwargs) 
TypeError: __init__() missing 1 required positional argument: 'transform' ``` The code I'm using: ```{python} def tokenize_function(examples): # Remove empty lines examples["text"] = [line for line in examples["text"] if len(line) > 0 and not line.isspace()] return tokenizer(examples["text"], padding=padding, truncation=True, max_length=data_args.max_seq_length) datasets.set_transform(tokenize_function) data_collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=data_args.mlm_probability) # Initialize our Trainer trainer = Trainer( model=model, args=training_args, train_dataset=datasets["train"] if training_args.do_train else None, eval_dataset=datasets["val"] if training_args.do_eval else None, tokenizer=tokenizer, data_collator=data_collator, ) ``` I've installed from source, master branch.
closed
https://github.com/huggingface/datasets/issues/1867
2021-02-12T10:38:31
2021-03-01T14:04:24
2021-02-24T12:00:43
{ "login": "avacaondata", "id": 35173563, "type": "User" }
[]
false
[]
807,017,816
1,866
Add dataset for Financial PhraseBank
closed
https://github.com/huggingface/datasets/pull/1866
2021-02-12T07:30:56
2021-02-17T14:22:36
2021-02-17T14:22:36
{ "login": "frankier", "id": 299380, "type": "User" }
[]
true
[]
806,388,290
1,865
Updated OPUS Open Subtitles Dataset with metadata information
Close #1844 Problems: - I ran `python datasets-cli test datasets/open_subtitles --save_infos --all_configs`, hence the change in `dataset_infos.json`, but it appears that the metadata features have not been added for all pairs. Any idea why that might be? - Possibly related to the above, I tried doing `pip uninstall datasets && pip install -e ".[dev]"` after the changes, and loading the dataset via `load_dataset("open_subtitles", lang1='hi', lang2='it')` to check if the update worked, but the loaded dataset did not contain the metadata fields (neither in the features nor doing `next(iter(dataset['train']))`). What step(s) did I miss? Questions: - Is it ok to have a `classmethod` in there? I have not seen any in the few other datasets I have checked. I could make it a local method of the `_generate_examples` method, but I'd rather not duplicate the logic...
closed
https://github.com/huggingface/datasets/pull/1865
2021-02-11T13:26:26
2021-02-19T12:38:09
2021-02-12T16:59:44
{ "login": "Valahaar", "id": 19476123, "type": "User" }
[]
true
[]
806,172,843
1,864
Add Winogender Schemas
## Adding a Dataset - **Name:** Winogender Schemas - **Description:** Winogender Schemas (inspired by Winograd Schemas) are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias in automated coreference resolution systems. - **Paper:** https://arxiv.org/abs/1804.09301 - **Data:** https://github.com/rudinger/winogender-schemas (see data directory) - **Motivation:** Testing gender bias in automated coreference resolution systems, improve coreference resolution in general. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/1864
2021-02-11T08:18:38
2021-02-11T08:19:51
2021-02-11T08:19:51
{ "login": "NielsRogge", "id": 48327001, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
806,171,311
1,863
Add WikiCREM
## Adding a Dataset - **Name:** WikiCREM - **Description:** A large unsupervised corpus for coreference resolution. - **Paper:** https://arxiv.org/abs/1905.06290 - **Github repo:**: https://github.com/vid-koci/bert-commonsense - **Data:** https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3 - **Motivation:** Coreference resolution, common sense reasoning Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/1863
2021-02-11T08:16:00
2021-03-07T07:27:13
null
{ "login": "NielsRogge", "id": 48327001, "type": "User" }
[ { "name": "dataset request", "color": "e99695" } ]
false
[]
805,722,293
1,862
Fix writing GPU Faiss index
As reported by @corticalstack, there is currently an error when we try to save a faiss index on GPU. I fixed that by checking the index's `getDevice()` method before calling `index_gpu_to_cpu`. Close #1859
closed
https://github.com/huggingface/datasets/pull/1862
2021-02-10T17:32:03
2021-02-10T18:17:48
2021-02-10T18:17:47
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
805,631,215
1,861
Fix Limit url
The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was removed recently on the master branch of the repo at https://github.com/ilmgut/limit_dataset This PR uses the previous commit sha to download the file instead, as suggested by @Paethon Close #1836
closed
https://github.com/huggingface/datasets/pull/1861
2021-02-10T15:44:56
2021-02-10T16:15:00
2021-02-10T16:14:59
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
805,510,037
1,860
Add loading from the Datasets Hub + add relative paths in download manager
With the new Datasets Hub on huggingface.co it's now possible to have a dataset repo with your own script and data. For example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main contains one script and two json files. You can load it using ```python from datasets import load_dataset d = load_dataset("lhoestq/custom_squad") ``` To be able to use the data files that live right next to the dataset script on the repo in the hub, I added relative paths support for the DownloadManager. For example in the repo mentioned above, there are two json files that can be downloaded via ```python _URLS = { "train": "train-v1.1.json", "dev": "dev-v1.1.json", } downloaded_files = dl_manager.download_and_extract(_URLS) ``` To make it work, I set the `base_path` of the DownloadManager to be the parent path of the dataset script (which comes from either a local path or a remote url). I also had to add the auth header of the requests to huggingface.co for private datasets repos. The token is fetched from [huggingface_hub](https://github.com/huggingface/huggingface_hub).
closed
https://github.com/huggingface/datasets/pull/1860
2021-02-10T13:24:11
2021-02-12T19:13:30
2021-02-12T19:13:29
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
805,479,025
1,859
Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU)
Error serializing faiss index. Error as follows: `Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index` Note: `torch.cuda.is_available()` reports: ``` Cuda is available cuda:0 ``` Adding index, device=0 for GPU. `dataset.add_faiss_index(column='embeddings', index_name='idx_embeddings', device=0)` However, during a quick debug, self.faiss_index has no attr "device" when checked in `search.py`, method `save`, so it fails to convert the GPU index to a CPU index. If I add the index without a device, the index is saved OK. ``` def save(self, file: str): """Serialize the FaissIndex on disk""" import faiss # noqa: F811 if ( hasattr(self.faiss_index, "device") and self.faiss_index.device is not None and self.faiss_index.device > -1 ): index = faiss.index_gpu_to_cpu(self.faiss_index) else: index = self.faiss_index faiss.write_index(index, file) ```
closed
https://github.com/huggingface/datasets/issues/1859
2021-02-10T12:41:00
2021-02-10T18:32:12
2021-02-10T18:17:47
{ "login": "corticalstack", "id": 3995321, "type": "User" }
[]
false
[]
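The bug in issue 1859 above is that the save path guards on a `device` attribute that faiss GPU indexes don't expose; they expose a `getDevice()` method instead, so the GPU-to-CPU conversion is silently skipped. A pure-Python sketch of the broken and fixed guards, with dummy classes standing in for faiss index objects (illustrative only, not the faiss API surface beyond `getDevice()`):

```python
class GpuIndex:
    """Dummy stand-in for a faiss GPU index: has getDevice(), no `device` attr."""
    def getDevice(self):
        return 0  # GPU id

class CpuIndex:
    """Dummy stand-in for a faiss CPU index."""

def needs_gpu_to_cpu_broken(index) -> bool:
    # Original guard: looks for a `device` attribute that doesn't exist,
    # so GPU indexes are never converted before serialization.
    return hasattr(index, "device") and index.device is not None and index.device > -1

def needs_gpu_to_cpu_fixed(index) -> bool:
    # Fixed guard: ask the index for its device via getDevice().
    return hasattr(index, "getDevice") and index.getDevice() > -1

print(needs_gpu_to_cpu_broken(GpuIndex()))  # False -- the bug
print(needs_gpu_to_cpu_fixed(GpuIndex()))   # True -- conversion happens
print(needs_gpu_to_cpu_fixed(CpuIndex()))   # False -- CPU index saved directly
```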
805,477,774
1,858
Clean config getenvs
Following #1848 Remove double getenv calls and fix one issue with rarfile cc @albertvillanova
closed
https://github.com/huggingface/datasets/pull/1858
2021-02-10T12:39:14
2021-02-10T15:52:30
2021-02-10T15:52:29
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
805,391,107
1,857
Unable to upload "community provided" dataset - 400 Client Error
Hi, I'm trying to upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens: ``` $ datasets-cli login $ datasets-cli upload_dataset my_dataset About to upload file /path/to/my_dataset/dataset_infos.json to S3 under filename my_dataset/dataset_infos.json and namespace username About to upload file /path/to/my_dataset/my_dataset.py to S3 under filename my_dataset/my_dataset.py and namespace username Proceed? [Y/n] Y Uploading... This might take a while if files are large 400 Client Error: Bad Request for url: https://huggingface.co/api/datasets/presign huggingface.co migrated to a new model hosting system. You need to upgrade to transformers v3.5+ to upload new models. More info at https://discuss.hugginface.co or https://twitter.com/julien_c. Thank you! ``` I'm using the latest releases of datasets and transformers.
closed
https://github.com/huggingface/datasets/issues/1857
2021-02-10T10:39:01
2021-08-03T05:06:13
2021-08-03T05:06:13
{ "login": "mwrzalik", "id": 1376337, "type": "User" }
[]
false
[]
805,360,200
1,856
load_dataset("amazon_polarity") NonMatchingChecksumError
Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError. To reproduce: ``` load_dataset("amazon_polarity") ``` This will give the following error: ``` --------------------------------------------------------------------------- NonMatchingChecksumError Traceback (most recent call last) <ipython-input-3-8559a03fe0f8> in <module>() ----> 1 dataset = load_dataset("amazon_polarity") 3 frames /usr/local/lib/python3.6/dist-packages/datasets/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name) 37 if len(bad_urls) > 0: 38 error_msg = "Checksums didn't match" + for_verification_name + ":\n" ---> 39 raise NonMatchingChecksumError(error_msg + str(bad_urls)) 40 logger.info("All the checksums matched successfully" + for_verification_name) 41 NonMatchingChecksumError: Checksums didn't match for dataset source files: ['https://drive.google.com/u/0/uc?id=0Bz8a_Dbh9QhbaW12WVVZS2drcnM&export=download'] ```
closed
https://github.com/huggingface/datasets/issues/1856
2021-02-10T10:00:56
2022-03-15T13:55:24
2022-03-15T13:55:23
{ "login": "yanxi0830", "id": 19946372, "type": "User" }
[]
false
[]
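The `NonMatchingChecksumError` in issue 1856 above is raised when the checksum recorded for a dataset's source files no longer matches what was actually downloaded (here, Google Drive serving different content than when the script was added). A self-contained sketch of that verification logic, mirroring the `verify_checksums` function visible in the traceback (simplified; the URL and file contents are illustrative):

```python
import hashlib

class NonMatchingChecksumError(Exception):
    pass

def verify_checksums(expected, recorded):
    # Simplified version of datasets.utils.info_utils.verify_checksums:
    # raise on any mismatch, listing the offending URLs.
    bad_urls = [url for url, checksum in expected.items() if recorded.get(url) != checksum]
    if bad_urls:
        raise NonMatchingChecksumError(
            "Checksums didn't match for dataset source files:\n" + str(bad_urls)
        )

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

url = "https://example.com/data.json"  # illustrative URL
expected = {url: sha256(b"file as it was when the script was added")}
recorded = {url: sha256(b"file after it changed upstream")}

try:
    verify_checksums(expected, recorded)
    message = None
except NonMatchingChecksumError as exc:
    message = str(exc)

print(message)  # names the URL whose content drifted
```

When the upstream file legitimately changed, the fix is to regenerate the recorded checksums rather than bypass verification.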
805,256,579
1,855
Minor fix in the docs
closed
https://github.com/huggingface/datasets/pull/1855
2021-02-10T07:27:43
2021-02-10T12:33:09
2021-02-10T12:33:09
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
805,204,397
1,854
Feature Request: Dataset.add_item
I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.map(binarizer)`. Is this possible at the moment? Is there an example? I'm happy to use raw `pa.Table` but not sure whether it will support uneven length entries. ### Desired API ```python import numpy as np tokenized: List[np.NDArray[np.int64]] = [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])] def build_dataset_from_tokenized(tokenized: List[np.NDArray[int]]) -> Dataset: """FIXME""" dataset = EmptyDataset() for t in tokenized: dataset.append(t) return dataset ds = build_dataset_from_tokenized(tokenized) assert (ds[0] == np.array([4,4,2])).all() ``` ### What I tried grep, google for "add one entry at a time", "datasets.append" ### Current Code This code achieves the same result but doesn't fit into the `add_item` abstraction. ```python dataset = load_dataset('text', data_files={'train': 'train.txt'}) tokenizer = RobertaTokenizerFast.from_pretrained('roberta-base', max_length=4096) def tokenize_function(examples): ids = tokenizer(examples['text'], return_attention_mask=False)['input_ids'] return {'input_ids': [x[1:] for x in ids]} ds = dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=['text'], load_from_cache_file=not overwrite_cache) print(ds['train'][0]) => np array ``` Thanks in advance!
closed
https://github.com/huggingface/datasets/issues/1854
2021-02-10T06:06:00
2021-04-23T10:01:30
2021-04-23T10:01:30
{ "login": "sshleifer", "id": 6045025, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
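The desired API from issue 1854 above can be sketched with a minimal append-style container. This is a hypothetical `AppendableDataset` used purely to illustrate the requested shape of the interface, not the `Dataset.add_item` eventually implemented in PR #1870 (which builds on Arrow under the hood):

```python
class AppendableDataset:
    """Illustrative append-style dataset: rows of uneven length are added
    one at a time and read back unchanged."""

    def __init__(self):
        self._rows = []

    def add_item(self, item):
        # copy so later mutation of the caller's list can't corrupt the dataset
        self._rows.append(list(item))

    def __getitem__(self, i):
        return self._rows[i]

    def __len__(self):
        return len(self._rows)

tokenized = [[4, 4, 2], [8, 6, 5, 5, 2], [3, 3, 31, 5]]  # uneven lengths
ds = AppendableDataset()
for t in tokenized:
    ds.add_item(t)

print(len(ds))  # 3
print(ds[0])    # [4, 4, 2]
```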
804,791,166
1,853
Configure library root logger at the module level
Configure the library root logger at the datasets.logging module level (singleton-like). By doing it this way: - we are sure configuration is done only once: module-level code is only run once - no need for a global variable - no need for a threading lock
closed
https://github.com/huggingface/datasets/pull/1853
2021-02-09T18:11:12
2021-02-10T12:32:34
2021-02-10T12:32:34
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
804,633,033
1,852
Add Arabic Speech Corpus
closed
https://github.com/huggingface/datasets/pull/1852
2021-02-09T15:02:26
2021-02-11T10:18:55
2021-02-11T10:18:55
{ "login": "zaidalyafeai", "id": 15667714, "type": "User" }
[]
true
[]
804,523,174
1,851
set bert_score version dependency
Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843)
closed
https://github.com/huggingface/datasets/pull/1851
2021-02-09T12:51:07
2021-02-09T14:21:48
2021-02-09T14:21:48
{ "login": "pvl", "id": 3596, "type": "User" }
[]
true
[]
804,412,249
1,850
Add cord 19 dataset
Initial version only reading the metadata in CSV. ### Checklist: - [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _infos(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIGS class attribute is filled with the different configurations of the dataset and that the BUILDER_CONFIG_CLASS is specified if there is a custom config class. - [x] Generate the metadata file dataset_infos.json for all configurations - [x] Generate the dummy data dummy_data.zip files to have the dataset script tested and that they don't weigh too much (<50KB) - [x] Add the dataset card README.md using the template and at least fill the tags - [x] Both tests for the real data and the dummy data pass. ### Extras: - [x] add more metadata - [x] add full text - [x] add pre-computed document embedding
closed
https://github.com/huggingface/datasets/pull/1850
2021-02-09T10:22:08
2021-02-09T15:16:26
2021-02-09T15:16:26
{ "login": "ggdupont", "id": 5583410, "type": "User" }
[]
true
[]
804,292,971
1,849
Add TIMIT
## Adding a Dataset - **Name:** *TIMIT* - **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk/ami/corpus/ / *Wikipedia*: https://en.wikipedia.org/wiki/TIMIT - **Data:** *https://deepai.org/dataset/timit* - **Motivation:** Important speech dataset If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/1849
2021-02-09T07:29:41
2021-03-15T05:59:37
2021-03-15T05:59:37
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
803,826,506
1,848
Refactoring: Create config module
Refactor configuration settings into their own module. This could be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created.
closed
https://github.com/huggingface/datasets/pull/1848
2021-02-08T18:43:51
2021-02-10T12:29:35
2021-02-10T12:29:35
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
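The "config module as singleton" pattern described in PR 1848 above can be sketched as follows: settings are computed once at import time, so there is no mutable global to guard with a lock. The setting names and environment variables here are illustrative, not the real `datasets.config` contents:

```python
import os
import types

def _build_config() -> types.SimpleNamespace:
    # Read each environment variable exactly once (no repeated getenv calls
    # scattered through the codebase).
    return types.SimpleNamespace(
        hf_datasets_cache=os.environ.get(
            "HF_DATASETS_CACHE", "~/.cache/huggingface/datasets"
        ),
        use_rarfile=os.environ.get("USE_RARFILE", "1") == "1",
    )

# Module-level code runs exactly once per interpreter, no matter how many
# times this module is subsequently imported -- Python caches the module
# object in sys.modules, giving singleton behavior for free.
config = _build_config()

print(config.hf_datasets_cache)
print(config.use_rarfile)
```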
803,824,694
1,847
[Metrics] Add word error metric metric
This PR adds the word error rate metric to datasets. WER: https://en.wikipedia.org/wiki/Word_error_rate for speech recognition. WER is the main metric used in ASR. `jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939)
closed
https://github.com/huggingface/datasets/pull/1847
2021-02-08T18:41:15
2021-02-09T17:53:21
2021-02-09T17:53:21
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[]
true
[]
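Word error rate, the metric PR 1847 above adds, is the word-level Levenshtein (edit) distance between reference and hypothesis, normalized by the reference length. The PR delegates to the `jiwer` package; the following is a self-contained sketch of the computation itself, assuming a non-empty reference:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution (or match)
            )
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion over six words
```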
803,806,380
1,846
Make DownloadManager downloaded/extracted paths accessible
Make the file paths downloaded/extracted by DownloadManager accessible. Close #1831. The approach: - I set these paths as DownloadManager attributes: these are DownloadManager's concerns - To access these from DatasetBuilder, I set the DownloadManager instance as a DatasetBuilder attribute: object composition
closed
https://github.com/huggingface/datasets/pull/1846
2021-02-08T18:14:42
2021-02-25T14:10:18
2021-02-25T14:10:18
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[]
true
[]
803,714,493
1,845
Enable logging propagation and remove logging handler
We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691 But since it's now fixed we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures as asked in #1826 I also removed the handler that was added since, according to the logging [documentation](https://docs.python.org/3/howto/logging.html#configuring-logging-for-a-library): > It is strongly advised that you do not add any handlers other than NullHandler to your library’s loggers. This is because the configuration of handlers is the prerogative of the application developer who uses your library. The application developer knows their target audience and what handlers are most appropriate for their application: if you add handlers ‘under the hood’, you might well interfere with their ability to carry out unit tests and deliver logs which suit their requirements. It could have been useful if we wanted to have a custom formatter for the logging but I think it's more important to keep the logging as default to not interfere with the users' logging management. Therefore I also removed the two methods `datasets.logging.enable_default_handler` and `datasets.logging.disable_default_handler`. cc @albertvillanova this should let you use capsys/caplog in pytest cc @LysandreJik @sgugger if you want to do the same in `transformers`
closed
https://github.com/huggingface/datasets/pull/1845
2021-02-08T16:22:13
2021-02-09T14:22:38
2021-02-09T14:22:37
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
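The library-logging convention that PR 1845 above adopts, following the quoted Python documentation, is: attach only a `NullHandler` to the library's logger and leave propagation on, so records flow up to whatever handlers the application configured. A minimal sketch (the logger name is illustrative):

```python
import logging

# What a library should do: one NullHandler, nothing else.
library_logger = logging.getLogger("example_library")
library_logger.addHandler(logging.NullHandler())
# Propagation is left at its default (True): no custom handler or formatter
# interferes with the application's logging setup, and tools that capture
# propagated records (such as pytest's caplog fixture) keep working.

print(library_logger.propagate)                        # True
print(type(library_logger.handlers[0]).__name__)       # NullHandler
```

The `NullHandler` only prevents the "No handlers could be found" warning when the application configured nothing at all.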
803,588,125
1,844
Update Open Subtitles corpus with original sentence IDs
Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles). I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat allowing for document-level machine translation (and other document-level stuff which could be cool to have); second, it's possible to have parallel sentences in multiple languages, as they share the same ids across bitexts. I think I should tag @abhishekkrthakur as he's the one who added it in the first place. Thanks!
closed
https://github.com/huggingface/datasets/issues/1844
2021-02-08T13:55:13
2021-02-12T17:38:58
2021-02-12T17:38:58
{ "login": "Valahaar", "id": 19476123, "type": "User" }
[ { "name": "good first issue", "color": "7057ff" } ]
false
[]
803,565,393
1,843
MustC Speech Translation
## Adding a Dataset - **Name:** *IWSLT19* - **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.* - **Homepage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - **Data:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation* - all data under "Allowed Training Data" and "Development and Evaluation Data for TED/How2" - **Motivation:** Important speech dataset If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/1843
2021-02-08T13:27:45
2021-05-14T14:53:34
null
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
803,563,149
1,842
Add AMI Corpus
## Adding a Dataset - **Name:** *AMI* - **Description:** *The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elicited using a scenario in which the participants play different roles in a design team, taking a design project from kick-off to completion over the course of a day. The rest consists of naturally occurring meetings in a range of domains. Detailed information can be found in the documentation section.* - **Paper:** *Homepage*: http://groups.inf.ed.ac.uk/ami/corpus/ - **Data:** *http://groups.inf.ed.ac.uk/ami/download/* - Select all cases in 1) and select "Individual Headsets" & "Microphone array" for 2) - **Motivation:** Important speech dataset If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/1842
2021-02-08T13:25:00
2023-02-28T16:29:22
2023-02-28T16:29:22
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
803,561,123
1,841
Add ljspeech
## Adding a Dataset - **Name:** *ljspeech* - **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of approximately 24 hours. The texts were published between 1884 and 1964, and are in the public domain. The audio was recorded in 2016-17 by the LibriVox project and is also in the public domain.* - **Paper:** *Homepage*: https://keithito.com/LJ-Speech-Dataset/ - **Data:** *https://keithito.com/LJ-Speech-Dataset/* - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/ljspeech If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/1841
2021-02-08T13:22:26
2021-03-15T05:59:02
2021-03-15T05:59:02
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
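The LJSpeech release described above ships its transcriptions as a pipe-delimited `metadata.csv` (clip id, raw transcription, normalized transcription). A minimal, stdlib-only sketch of parsing that layout — the sample rows below are illustrative stand-ins, not actual corpus content:

```python
import csv
import io

# Illustrative sample in the LJSpeech metadata.csv layout:
# <clip id>|<raw transcription>|<normalized transcription>
sample = (
    "LJ001-0001|First example sentence.|First example sentence.\n"
    "LJ001-0002|Second example sentence.|Second example sentence.\n"
)

def parse_metadata(text):
    """Parse pipe-delimited metadata rows into (clip_id, raw, normalized) tuples."""
    reader = csv.reader(io.StringIO(text), delimiter="|", quoting=csv.QUOTE_NONE)
    return [tuple(row) for row in reader]

rows = parse_metadata(sample)
print(len(rows))   # 2 rows parsed
print(rows[0][0])  # "LJ001-0001"
```

In a real loading script the same parser would run over the `metadata.csv` extracted from the downloaded archive, pairing each `clip_id` with its `wavs/<clip_id>.wav` file.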
803,560,039
1,840
Add common voice
## Adding a Dataset - **Name:** *common voice* - **Description:** *Mozilla Common Voice Dataset* - **Paper:** Homepage: https://voice.mozilla.org/en/datasets - **Data:** https://voice.mozilla.org/en/datasets - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/common_voice If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
closed
https://github.com/huggingface/datasets/issues/1840
2021-02-08T13:21:05
2022-03-20T15:23:40
2021-03-15T05:56:21
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
803,559,164
1,839
Add Voxforge
## Adding a Dataset - **Name:** *voxforge* - **Description:** *VoxForge is a language classification dataset. It consists of user-submitted audio clips submitted to the website. In this release, data from 6 languages is collected - English, Spanish, French, German, Russian, and Italian. Since the website is constantly updated, and for the sake of reproducibility, this release contains only recordings submitted prior to 2020-01-01. The samples are split between train, validation and testing so that samples from each speaker belong to exactly one split.* - **Paper:** *Homepage*: http://www.voxforge.org/ - **Data:** *http://www.voxforge.org/home/downloads* - **Motivation:** Important speech dataset - **TFDatasets Implementation**: https://www.tensorflow.org/datasets/catalog/voxforge If interested in tackling this issue, feel free to tag @patrickvonplaten Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
open
https://github.com/huggingface/datasets/issues/1839
2021-02-08T13:19:56
2021-02-08T13:28:31
null
{ "login": "patrickvonplaten", "id": 23423619, "type": "User" }
[ { "name": "dataset request", "color": "e99695" }, { "name": "speech", "color": "d93f0b" } ]
false
[]
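The VoxForge request notes that all samples from one speaker must land in exactly one split. A minimal sketch of one way to achieve such a speaker-disjoint split deterministically (this is an illustrative technique, not the actual TFDS implementation):

```python
import hashlib

def speaker_split(speaker_id, val_pct=10, test_pct=10):
    """Deterministically assign a speaker to a split by hashing the
    speaker id, so every clip from that speaker shares one split."""
    bucket = int(hashlib.md5(speaker_id.encode("utf-8")).hexdigest(), 16) % 100
    if bucket < test_pct:
        return "test"
    if bucket < test_pct + val_pct:
        return "validation"
    return "train"

# Hypothetical (speaker_id, clip) pairs for illustration:
clips = [("speaker_a", "a1.wav"), ("speaker_a", "a2.wav"), ("speaker_b", "b1.wav")]
assignments = [(spk, clip, speaker_split(spk)) for spk, clip in clips]
print(assignments[0][2] == assignments[1][2])  # True: same speaker, same split
```

Because the assignment depends only on the speaker id, re-running the script (or adding new clips from a known speaker) never moves a speaker across the train/validation/test boundary.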