| Column | Type | Values / range |
| --- | --- | --- |
| id | int64 | 599M to 3.29B |
| url | string | lengths 58 to 61 |
| html_url | string | lengths 46 to 51 |
| number | int64 | 1 to 7.72k |
| title | string | lengths 1 to 290 |
| state | string | 2 values |
| comments | int64 | 0 to 70 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 to 2025-08-05 09:28:51 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 to 2025-08-05 11:39:56 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 to 2025-08-01 05:15:45 |
| user_login | string | lengths 3 to 26 |
| labels | list | lengths 0 to 4 |
| body | string | lengths 0 to 228k |
| is_pull_request | bool | 2 classes |
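A minimal sketch of how a dataset with this schema might be loaded and inspected with the 🤗 `datasets` library; the repository id below is a placeholder, not something taken from this preview:

```python
# Minimal sketch (assumed usage, not part of the preview): load a GitHub-issues
# dataset with the schema above. "user/github-issues" is a placeholder repo id.
from datasets import load_dataset

issues = load_dataset("user/github-issues", split="train")

print(issues)           # number of rows and column names
print(issues.features)  # per-column feature types

# Example: keep only genuine issues (not pull requests) that drew comments.
discussed = issues.filter(lambda x: not x["is_pull_request"] and x["comments"] > 0)
print(discussed[0]["title"], discussed[0]["html_url"])
```

The preview rows follow (issues #352 through #455 of huggingface/datasets, most recent first).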
| id | url | html_url | number | title | state | comments | created_at | updated_at | closed_at | user_login | labels | body | is_pull_request |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 668,037,965 | https://api.github.com/repos/huggingface/datasets/issues/455 | https://github.com/huggingface/datasets/pull/455 | 455 | Add bleurt | closed | 4 | 2020-07-29T18:08:32 | 2020-07-31T13:56:14 | 2020-07-31T13:56:14 | yjernite | [] | This PR adds the BLEURT metric to the library. The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`. Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend usi... | true |
| 668,011,577 | https://api.github.com/repos/huggingface/datasets/issues/454 | https://github.com/huggingface/datasets/pull/454 | 454 | Create SECURITY.md | closed | 0 | 2020-07-29T17:23:34 | 2020-07-29T21:45:52 | 2020-07-29T21:45:52 | ChenZehong13 | [] | | true |
| 667,728,247 | https://api.github.com/repos/huggingface/datasets/issues/453 | https://github.com/huggingface/datasets/pull/453 | 453 | add builder tests | closed | 0 | 2020-07-29T10:22:07 | 2020-07-29T11:14:06 | 2020-07-29T11:14:05 | lhoestq | [] | I added `as_dataset` and `download_and_prepare` to the tests | true |
| 667,498,295 | https://api.github.com/repos/huggingface/datasets/issues/452 | https://github.com/huggingface/datasets/pull/452 | 452 | Guardian authorship dataset | closed | 6 | 2020-07-29T02:23:57 | 2020-08-20T15:09:57 | 2020-08-20T15:07:56 | malikaltakrori | [] | A new dataset: Guardian news articles for authorship attribution **tests passed:** python nlp-cli dummy_data datasets/guardian_authorship --save_infos --all_configs RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_guardian_authorship **Tests failed:** Real data:... | true |
| 667,210,468 | https://api.github.com/repos/huggingface/datasets/issues/451 | https://github.com/huggingface/datasets/pull/451 | 451 | Fix csv/json/txt cache dir | closed | 4 | 2020-07-28T16:30:51 | 2020-07-29T13:57:23 | 2020-07-29T13:57:22 | lhoestq | [] | The cache dir for csv/json/txt datasets was always the same. This is an issue because it should be different depending on the data files provided by the user. To fix that, I added a line that use the hash of the data files provided by the user to define the cache dir. This should fix #444 | true |
| 667,074,120 | https://api.github.com/repos/huggingface/datasets/issues/450 | https://github.com/huggingface/datasets/pull/450 | 450 | add sogou_news | closed | 0 | 2020-07-28T13:29:10 | 2020-07-29T13:30:18 | 2020-07-29T13:30:17 | mariamabarham | [] | This PR adds the sogou news dataset #353 | true |
| 666,898,923 | https://api.github.com/repos/huggingface/datasets/issues/449 | https://github.com/huggingface/datasets/pull/449 | 449 | add reuters21578 dataset | closed | 3 | 2020-07-28T08:58:12 | 2023-09-24T09:49:28 | 2020-08-03T11:10:31 | mariamabarham | [] | This PR adds the `Reuters_21578` dataset https://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html #353 The datasets is a lit of `.sgm` files which are a bit different from xml file indeed `xml.etree` couldn't be used to read files. I consider them as text file (to avoid using external library) and read ... | true |
| 666,893,443 | https://api.github.com/repos/huggingface/datasets/issues/448 | https://github.com/huggingface/datasets/pull/448 | 448 | add aws load metric test | closed | 3 | 2020-07-28T08:50:22 | 2020-07-28T15:02:27 | 2020-07-28T15:02:27 | idoh | [] | Following issue #445 Added a test to recognize import errors of all metrics | true |
| 666,842,115 | https://api.github.com/repos/huggingface/datasets/issues/447 | https://github.com/huggingface/datasets/pull/447 | 447 | [BugFix] fix wrong import of DEFAULT_TOKENIZER | closed | 0 | 2020-07-28T07:41:10 | 2020-07-28T12:58:01 | 2020-07-28T12:52:05 | idoh | [] | Fixed the path to `DEFAULT_TOKENIZER` #445 | true |
| 666,837,351 | https://api.github.com/repos/huggingface/datasets/issues/446 | https://github.com/huggingface/datasets/pull/446 | 446 | [BugFix] fix wrong import of DEFAULT_TOKENIZER | closed | 0 | 2020-07-28T07:32:47 | 2020-07-28T07:34:46 | 2020-07-28T07:33:59 | idoh | [] | Fixed the path to `DEFAULT_TOKENIZER` #445 | true |
| 666,836,658 | https://api.github.com/repos/huggingface/datasets/issues/445 | https://github.com/huggingface/datasets/issues/445 | 445 | DEFAULT_TOKENIZER import error in sacrebleu | closed | 1 | 2020-07-28T07:31:30 | 2020-07-28T12:58:56 | 2020-07-28T12:58:56 | idoh | [] | Latest Version 0.3.0 When loading the metric "sacrebleu" there is an import error due to the wrong path ![image](https://user-images.githubusercontent.com/5303103/88633063-2c5e5f00-d0bd-11ea-8ca8-4704dc975433.png) | false |
| 666,280,842 | https://api.github.com/repos/huggingface/datasets/issues/444 | https://github.com/huggingface/datasets/issues/444 | 444 | Keep loading old file even I specify a new file in load_dataset | closed | 2 | 2020-07-27T13:08:06 | 2020-07-29T13:57:22 | 2020-07-29T13:57:22 | joshhu | [ "dataset bug" ] | I used load a file called 'a.csv' by ``` dataset = load_dataset('csv', data_file='./a.csv') ``` And after a while, I tried to load another csv called 'b.csv' ``` dataset = load_dataset('csv', data_file='./b.csv') ``` However, the new dataset seems to remain the old 'a.csv' and not loading new csv file. Even... | false |
| 666,246,716 | https://api.github.com/repos/huggingface/datasets/issues/443 | https://github.com/huggingface/datasets/issues/443 | 443 | Cannot unpickle saved .pt dataset with torch.save()/load() | closed | 1 | 2020-07-27T12:13:37 | 2020-07-27T13:05:11 | 2020-07-27T13:05:11 | vegarab | [] | Saving a formatted torch dataset to file using `torch.save()`. Loading the same file fails during unpickling: ```python >>> import torch >>> import nlp >>> squad = nlp.load_dataset("squad.py", split="train") >>> squad Dataset(features: {'source_text': Value(dtype='string', id=None), 'target_text': Value(dtype... | false |
| 666,201,810 | https://api.github.com/repos/huggingface/datasets/issues/442 | https://github.com/huggingface/datasets/issues/442 | 442 | [Suggestion] Glue Diagnostic Data with Labels | open | 0 | 2020-07-27T10:59:58 | 2020-08-24T15:13:20 | null | ggbetz | [ "Dataset discussion" ] | Hello! First of all, thanks for setting up this useful project! I've just realised you provide the the [Glue Diagnostics Data](https://huggingface.co/nlp/viewer/?dataset=glue&config=ax) without labels, indicating in the `GlueConfig` that you've only a test set. Yet, the data with labels is available, too (see als... | false |
| 666,148,413 | https://api.github.com/repos/huggingface/datasets/issues/441 | https://github.com/huggingface/datasets/pull/441 | 441 | Add features parameter in load dataset | closed | 2 | 2020-07-27T09:50:01 | 2020-07-30T12:51:17 | 2020-07-30T12:51:16 | lhoestq | [] | Added `features` argument in `nlp.load_dataset`. If they don't match the data type, it raises a `ValueError`. It's a draft PR because #440 needs to be merged first. | true |
| 666,116,823 | https://api.github.com/repos/huggingface/datasets/issues/440 | https://github.com/huggingface/datasets/pull/440 | 440 | Fix user specified features in map | closed | 0 | 2020-07-27T09:04:26 | 2020-07-28T09:25:23 | 2020-07-28T09:25:22 | lhoestq | [] | `.map` didn't keep the user specified features because of an issue in the writer. The writer used to overwrite the user specified features with inferred features. I also added tests to make sure it doesn't happen again. | true |
| 665,964,673 | https://api.github.com/repos/huggingface/datasets/issues/439 | https://github.com/huggingface/datasets/issues/439 | 439 | Issues: Adding a FAISS or Elastic Search index to a Dataset | closed | 5 | 2020-07-27T04:25:17 | 2020-10-28T01:46:24 | 2020-10-28T01:46:24 | nsankar | [] | It seems the DPRContextEncoder, DPRContextEncoderTokenizer cited[ in this documentation](https://huggingface.co/nlp/faiss_and_ea.html) is not implemented ? It didnot work with the standard nlp installation . Also, I couldn't find or use it with the latest nlp install from github in Colab. Is there any dependency on t... | false |
| 665,865,490 | https://api.github.com/repos/huggingface/datasets/issues/438 | https://github.com/huggingface/datasets/issues/438 | 438 | New Datasets: IWSLT15+, ITTB | open | 2 | 2020-07-26T21:43:04 | 2020-08-24T15:12:15 | null | sshleifer | [ "dataset request" ] | **Links:** [iwslt](https://pytorchnlp.readthedocs.io/en/latest/_modules/torchnlp/datasets/iwslt.html) Don't know if that link is up to date. [ittb](http://www.cfilt.iitb.ac.in/iitb_parallel/) **Motivation**: replicate mbart finetuning results (table below) ![image](https://user-images.githubusercontent.com/60450... | false |
| 665,597,176 | https://api.github.com/repos/huggingface/datasets/issues/437 | https://github.com/huggingface/datasets/pull/437 | 437 | Fix XTREME PAN-X loading | closed | 4 | 2020-07-25T14:44:57 | 2020-07-30T08:28:15 | 2020-07-30T08:28:15 | lvwerra | [] | Hi 🤗 In response to the discussion in #425 @lewtun and I made some fixes to the repo. In the original XTREME implementation the PAN-X dataset for named entity recognition loaded each word/tag pair as a single row and the sentence relation was lost. With the fix each row contains the list of all words in a single sen... | true |
| 665,582,167 | https://api.github.com/repos/huggingface/datasets/issues/436 | https://github.com/huggingface/datasets/issues/436 | 436 | Google Colab - load_dataset - PyArrow exception | closed | 9 | 2020-07-25T13:05:20 | 2020-08-20T08:08:18 | 2020-08-20T08:08:18 | nsankar | [] | With latest PyArrow 1.0.0 installed, I get the following exception . Restarting colab has the same issue ImportWarning: To use `nlp`, the module `pyarrow>=0.16.0` is required, and the current version of `pyarrow` doesn't match this condition. If you are running this in a Google Colab, you should probably just rest... | false |
| 665,507,141 | https://api.github.com/repos/huggingface/datasets/issues/435 | https://github.com/huggingface/datasets/issues/435 | 435 | ImportWarning for pyarrow 1.0.0 | closed | 4 | 2020-07-25T03:44:39 | 2020-09-08T17:57:15 | 2020-08-03T16:37:32 | HanGuo97 | [] | The following PR raised ImportWarning at `pyarrow ==1.0.0` https://github.com/huggingface/nlp/pull/265/files | false |
| 665,477,638 | https://api.github.com/repos/huggingface/datasets/issues/434 | https://github.com/huggingface/datasets/pull/434 | 434 | Fixed check for pyarrow | closed | 1 | 2020-07-25T00:16:53 | 2020-07-25T06:36:34 | 2020-07-25T06:36:34 | nadahlberg | [] | Fix check for pyarrow in __init__.py. Previously would raise an error for pyarrow >= 1.0.0 | true |
| 665,311,025 | https://api.github.com/repos/huggingface/datasets/issues/433 | https://github.com/huggingface/datasets/issues/433 | 433 | How to reuse functionality of a (generic) dataset? | closed | 4 | 2020-07-24T17:27:37 | 2022-10-04T17:59:34 | 2022-10-04T17:59:33 | ArneBinder | [] | I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to... | false |
| 665,234,340 | https://api.github.com/repos/huggingface/datasets/issues/432 | https://github.com/huggingface/datasets/pull/432 | 432 | Fix handling of config files while loading datasets from multiple processes | closed | 4 | 2020-07-24T15:10:57 | 2020-08-01T17:11:42 | 2020-07-30T08:25:28 | orsharir | [] | When loading shards on several processes, each process upon loading the dataset will overwrite dataset_infos.json in <package path>/datasets/<dataset name>/<hash>/dataset_infos.json. It does so every time, even when the target file already exists and is identical. Because multiple processes rewrite the same file in par... | true |
| 665,044,416 | https://api.github.com/repos/huggingface/datasets/issues/431 | https://github.com/huggingface/datasets/pull/431 | 431 | Specify split post processing + Add post processing resources downloading | closed | 4 | 2020-07-24T09:29:19 | 2020-07-31T09:05:04 | 2020-07-31T09:05:03 | lhoestq | [] | Previously if you tried to do ```python from nlp import load_dataset wiki = load_dataset("wiki_dpr", "psgs_w100_with_nq_embeddings", split="train[:100]", with_index=True) ``` Then you'd get an error `Index size should match Dataset size...` This was because it was trying to use the full index (21M elements). ... | true |
| 664,583,837 | https://api.github.com/repos/huggingface/datasets/issues/430 | https://github.com/huggingface/datasets/pull/430 | 430 | add DatasetDict | closed | 4 | 2020-07-23T15:43:49 | 2020-08-04T01:01:53 | 2020-07-29T09:06:22 | lhoestq | [] | ## Add DatasetDict ### Overview When you call `load_dataset` it can return a dictionary of datasets if there are several splits (train/test for example). If you wanted to apply dataset transforms you had to iterate over each split and apply the transform. Instead of returning a dict, it now returns a `nlp.Dat... | true |
| 664,412,137 | https://api.github.com/repos/huggingface/datasets/issues/429 | https://github.com/huggingface/datasets/pull/429 | 429 | mlsum | closed | 6 | 2020-07-23T11:52:39 | 2020-07-31T11:46:20 | 2020-07-31T11:46:20 | RachelKer | [] | Hello, The tests for the load_real_data fail, as there is no default language subset to download it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I submit this PR anyway. The dataset is avalaible on : https... | true |
| 664,367,086 | https://api.github.com/repos/huggingface/datasets/issues/428 | https://github.com/huggingface/datasets/pull/428 | 428 | fix concatenate_datasets | closed | 0 | 2020-07-23T10:30:59 | 2020-07-23T10:35:00 | 2020-07-23T10:34:58 | lhoestq | [] | `concatenate_datatsets` used to test that the different`nlp.Dataset.schema` match, but this attribute was removed in #423 | true |
| 664,341,623 | https://api.github.com/repos/huggingface/datasets/issues/427 | https://github.com/huggingface/datasets/pull/427 | 427 | Allow sequence features for beam + add processed Natural Questions | closed | 0 | 2020-07-23T09:52:41 | 2020-07-23T13:09:30 | 2020-07-23T13:09:29 | lhoestq | [] | ## Allow Sequence features for Beam Datasets + add Natural Questions ### The issue The steps of beam datasets processing is the following: - download the source files and send them in a remote storage (gcs) - process the files using a beam runner (dataflow) - save output in remote storage (gcs) - convert outp... | true |
| 664,203,897 | https://api.github.com/repos/huggingface/datasets/issues/426 | https://github.com/huggingface/datasets/issues/426 | 426 | [FEATURE REQUEST] Multiprocessing with for dataset.map, dataset.filter | closed | 6 | 2020-07-23T05:00:41 | 2021-03-12T09:34:12 | 2020-09-07T14:48:04 | timothyjlaurent | [ "enhancement" ] | It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset sending each shard to a process/thread/dask pool and using the new `nlp.concatenate_dataset()` function to join them all together? | false |
| 664,029,848 | https://api.github.com/repos/huggingface/datasets/issues/425 | https://github.com/huggingface/datasets/issues/425 | 425 | Correct data structure for PAN-X task in XTREME dataset? | closed | 7 | 2020-07-22T20:29:20 | 2020-08-02T13:30:34 | 2020-08-02T13:30:34 | lewtun | [] | Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['tr... | false |
| 663,858,552 | https://api.github.com/repos/huggingface/datasets/issues/424 | https://github.com/huggingface/datasets/pull/424 | 424 | Web of science | closed | 0 | 2020-07-22T15:38:31 | 2020-07-23T14:27:58 | 2020-07-23T14:27:56 | mariamabarham | [] | this PR adds the WebofScience dataset #353 | true |
| 663,079,359 | https://api.github.com/repos/huggingface/datasets/issues/423 | https://github.com/huggingface/datasets/pull/423 | 423 | Change features vs schema logic | closed | 2 | 2020-07-21T14:52:47 | 2020-07-25T09:08:34 | 2020-07-23T10:15:17 | lhoestq | [] | ## New logic for `nlp.Features` in datasets Previously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`. However `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files. Changes: - Remove `sche... | true |
| 663,028,497 | https://api.github.com/repos/huggingface/datasets/issues/422 | https://github.com/huggingface/datasets/pull/422 | 422 | - Corrected encoding for IMDB. | closed | 0 | 2020-07-21T13:46:59 | 2020-07-22T16:02:53 | 2020-07-22T16:02:53 | ghazi-f | [] | The preparation phase (after the download phase) crashed on windows because of charmap encoding not being able to decode certain characters. This change suggested in Issue #347 fixes it for the IMDB dataset. | true |
| 662,213,864 | https://api.github.com/repos/huggingface/datasets/issues/421 | https://github.com/huggingface/datasets/pull/421 | 421 | Style change | closed | 3 | 2020-07-20T20:08:29 | 2020-07-22T16:08:40 | 2020-07-22T16:08:39 | lordtt13 | [] | make quality and make style ran on scripts | true |
| 662,029,782 | https://api.github.com/repos/huggingface/datasets/issues/420 | https://github.com/huggingface/datasets/pull/420 | 420 | Better handle nested features | closed | 0 | 2020-07-20T16:44:13 | 2020-07-21T08:20:49 | 2020-07-21T08:09:52 | lhoestq | [] | Changes: - added arrow schema to features conversion (it's going to be useful to fix #342 ) - make flatten handle deep features (useful for tfrecords conversion in #339 ) - add tests for flatten and features conversions - the reader now returns the kwargs to instantiate a Dataset (fix circular dependencies) | true |
| 661,974,747 | https://api.github.com/repos/huggingface/datasets/issues/419 | https://github.com/huggingface/datasets/pull/419 | 419 | EmoContext dataset add | closed | 0 | 2020-07-20T15:48:45 | 2020-07-24T08:22:01 | 2020-07-24T08:22:00 | lordtt13 | [] | EmoContext Dataset add Signed-off-by: lordtt13 <thakurtanmay72@yahoo.com> | true |
| 661,914,873 | https://api.github.com/repos/huggingface/datasets/issues/418 | https://github.com/huggingface/datasets/issues/418 | 418 | Addition of google drive links to dl_manager | closed | 3 | 2020-07-20T14:52:02 | 2020-07-20T15:39:32 | 2020-07-20T15:39:32 | lordtt13 | [] | Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown. This is the script for me: ```python class EmoConfig(nlp.BuilderConfig): """BuilderConfig ... | false |
| 661,804,054 | https://api.github.com/repos/huggingface/datasets/issues/417 | https://github.com/huggingface/datasets/pull/417 | 417 | Fix docstrins multiple metrics instances | closed | 0 | 2020-07-20T13:08:59 | 2020-07-22T09:51:00 | 2020-07-22T09:50:59 | lhoestq | [] | We change the docstrings of `nlp.Metric.compute`, `nlp.Metric.add` and `nlp.Metric.add_batch` depending on which metric is instantiated. However we had issues when instantiating multiple metrics (docstrings were duplicated). This should fix #304 | true |
| 661,635,393 | https://api.github.com/repos/huggingface/datasets/issues/416 | https://github.com/huggingface/datasets/pull/416 | 416 | Fix xtreme panx directory | closed | 1 | 2020-07-20T10:09:17 | 2020-07-21T08:15:46 | 2020-07-21T08:15:44 | lhoestq | [] | Fix #412 | true |
| 660,687,076 | https://api.github.com/repos/huggingface/datasets/issues/415 | https://github.com/huggingface/datasets/issues/415 | 415 | Something is wrong with WMT 19 kk-en dataset | open | 0 | 2020-07-19T08:18:51 | 2020-07-20T09:54:26 | null | ChenghaoMou | [ "dataset bug" ] | The translation in the `train` set does not look right: ``` >>>import nlp >>>from nlp import load_dataset >>>dataset = load_dataset('wmt19', 'kk-en') >>>dataset["train"]["translation"][0] {'kk': 'Trumpian Uncertainty', 'en': 'Трамптық белгісіздік'} >>>dataset["validation"]["translation"][0] {'kk': 'Ақша-несие... | false |
| 660,654,013 | https://api.github.com/repos/huggingface/datasets/issues/414 | https://github.com/huggingface/datasets/issues/414 | 414 | from_dict delete? | closed | 2 | 2020-07-19T07:08:36 | 2020-07-21T02:21:17 | 2020-07-21T02:21:17 | hackerxiaobai | [] | AttributeError: type object 'Dataset' has no attribute 'from_dict' | false |
| 660,063,655 | https://api.github.com/repos/huggingface/datasets/issues/413 | https://github.com/huggingface/datasets/issues/413 | 413 | Is there a way to download only NQ dev? | closed | 3 | 2020-07-18T10:28:23 | 2022-02-11T09:50:21 | 2022-02-11T09:50:21 | tholor | [] | Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('natural_questions', split="validation", bea... | false |
| 660,047,139 | https://api.github.com/repos/huggingface/datasets/issues/412 | https://github.com/huggingface/datasets/issues/412 | 412 | Unable to load XTREME dataset from disk | closed | 3 | 2020-07-18T09:55:00 | 2020-07-21T08:15:44 | 2020-07-21T08:15:44 | lewtun | [] | Hi 🤗 team! ## Description of the problem Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark. I have manually downloaded the `AmazonPho... | false |
| 659,393,398 | https://api.github.com/repos/huggingface/datasets/issues/411 | https://github.com/huggingface/datasets/pull/411 | 411 | Sbf | closed | 0 | 2020-07-17T16:19:45 | 2020-07-21T09:13:46 | 2020-07-21T09:13:45 | mariamabarham | [] | This PR adds the Social Bias Frames Dataset (ACL 2020) . dataset homepage: https://homes.cs.washington.edu/~msap/social-bias-frames/ | true |
| 659,242,871 | https://api.github.com/repos/huggingface/datasets/issues/410 | https://github.com/huggingface/datasets/pull/410 | 410 | 20newsgroup | closed | 0 | 2020-07-17T13:07:57 | 2020-07-20T07:05:29 | 2020-07-20T07:05:28 | mariamabarham | [] | Add 20Newsgroup dataset. #353 | true |
| 659,128,611 | https://api.github.com/repos/huggingface/datasets/issues/409 | https://github.com/huggingface/datasets/issues/409 | 409 | train_test_split error: 'dict' object has no attribute 'deepcopy' | closed | 2 | 2020-07-17T10:36:28 | 2020-07-21T14:34:52 | 2020-07-21T14:34:52 | morganmcg1 | [] | `train_test_split` is giving me an error when I try and call it: `'dict' object has no attribute 'deepcopy'` ## To reproduce ``` dataset = load_dataset('glue', 'mrpc', split='train') dataset = dataset.train_test_split(test_size=0.2) ``` ## Full Stacktrace ``` -------------------------------------------... | false |
| 659,064,144 | https://api.github.com/repos/huggingface/datasets/issues/408 | https://github.com/huggingface/datasets/pull/408 | 408 | Add tests datasets gcp | closed | 0 | 2020-07-17T09:23:27 | 2020-07-17T09:26:57 | 2020-07-17T09:26:56 | lhoestq | [] | Some datasets are available on our google cloud storage in arrow format, so that the users don't need to process the data. These tests make sure that they're always available. It also makes sure that their scripts are in sync between S3 and the repo. This should avoid future issues like #407 | true |
| 658,672,736 | https://api.github.com/repos/huggingface/datasets/issues/407 | https://github.com/huggingface/datasets/issues/407 | 407 | MissingBeamOptions for Wikipedia 20200501.en | closed | 4 | 2020-07-16T23:48:03 | 2021-01-12T11:41:16 | 2020-07-17T14:24:28 | mitchellgordon95 | [] | There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: ``` Downloading and preparing dataset wikipedia... | false |
| 658,581,764 | https://api.github.com/repos/huggingface/datasets/issues/406 | https://github.com/huggingface/datasets/issues/406 | 406 | Faster Shuffling? | closed | 7 | 2020-07-16T21:21:53 | 2023-08-16T09:52:39 | 2020-09-07T14:45:25 | mitchellgordon95 | [] | Consider shuffling bookcorpus: ``` dataset = nlp.load_dataset('bookcorpus', split='train') dataset.shuffle() ``` According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`... | false |
| 658,580,192 | https://api.github.com/repos/huggingface/datasets/issues/405 | https://github.com/huggingface/datasets/pull/405 | 405 | Make select() faster by batching reads | closed | 0 | 2020-07-16T21:19:45 | 2020-07-17T17:05:44 | 2020-07-17T16:51:26 | mitchellgordon95 | [] | Here's a benchmark: ``` dataset = nlp.load_dataset('bookcorpus', split='train') start = time.time() dataset.select(np.arange(1000), reader_batch_size=1, load_from_cache_file=False) end = time.time() print(f'{end - start}') start = time.time() dataset.select(np.arange(1000), reader_batch_size=1000, load_fr... | true |
| 658,400,987 | https://api.github.com/repos/huggingface/datasets/issues/404 | https://github.com/huggingface/datasets/pull/404 | 404 | Add seed in metrics | closed | 0 | 2020-07-16T17:27:05 | 2020-07-20T10:12:35 | 2020-07-20T10:12:34 | lhoestq | [] | With #361 we noticed that some metrics were not deterministic. In this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`. The seed is set only when `compute` is called, and reset afterwards. Moreover when calling `compute` with the same metric instance (i.e. same experiment... | true |
| 658,325,756 | https://api.github.com/repos/huggingface/datasets/issues/403 | https://github.com/huggingface/datasets/pull/403 | 403 | return python objects instead of arrays by default | closed | 0 | 2020-07-16T15:51:52 | 2020-07-17T11:37:01 | 2020-07-17T11:37:00 | lhoestq | [] | We were using to_pandas() to convert from arrow types, however it returns numpy arrays instead of python lists. I fixed it by using to_pydict/to_pylist instead. Fix #387 It was mentioned in https://github.com/huggingface/transformers/issues/5729 | true |
| 658,001,288 | https://api.github.com/repos/huggingface/datasets/issues/402 | https://github.com/huggingface/datasets/pull/402 | 402 | Search qa | closed | 0 | 2020-07-16T09:00:10 | 2020-07-16T14:27:00 | 2020-07-16T14:26:59 | mariamabarham | [] | add SearchQA dataset #336 | true |
| 657,996,252 | https://api.github.com/repos/huggingface/datasets/issues/401 | https://github.com/huggingface/datasets/pull/401 | 401 | add web_questions | closed | 3 | 2020-07-16T08:54:59 | 2020-08-06T06:16:20 | 2020-08-06T06:16:19 | mariamabarham | [] | add Web Question dataset #336 Maybe @patrickvonplaten you can help with the dummy_data structure? it still broken | true |
| 657,975,600 | https://api.github.com/repos/huggingface/datasets/issues/400 | https://github.com/huggingface/datasets/pull/400 | 400 | Web questions | closed | 0 | 2020-07-16T08:28:29 | 2020-07-16T08:50:51 | 2020-07-16T08:42:54 | mariamabarham | [] | add the WebQuestion dataset #336 | true |
| 657,841,433 | https://api.github.com/repos/huggingface/datasets/issues/399 | https://github.com/huggingface/datasets/pull/399 | 399 | Spelling mistake | closed | 1 | 2020-07-16T04:37:58 | 2020-07-16T06:49:48 | 2020-07-16T06:49:37 | BlancRay | [] | In "Formatting the dataset" part, "The two toehr modifications..." should be "The two other modifications..." ,the word "other" wrong spelled as "toehr". | true |
| 657,511,962 | https://api.github.com/repos/huggingface/datasets/issues/398 | https://github.com/huggingface/datasets/pull/398 | 398 | Add inline links | closed | 2 | 2020-07-15T17:04:04 | 2020-07-22T10:14:22 | 2020-07-22T10:14:22 | bharatr21 | [] | Add inline links to `Contributing.md` | true |
| 657,510,856 | https://api.github.com/repos/huggingface/datasets/issues/397 | https://github.com/huggingface/datasets/pull/397 | 397 | Add contiguous sharding | closed | 0 | 2020-07-15T17:02:58 | 2020-07-17T16:59:31 | 2020-07-17T16:59:31 | jarednielsen | [] | This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https://github.com/huggingface/nlp/pull/389 also uses it for splitting the dataset for distributed preprocessing. Usage: ``` nlp.concatenate_datas... | true |
| 657,477,952 | https://api.github.com/repos/huggingface/datasets/issues/396 | https://github.com/huggingface/datasets/pull/396 | 396 | Fix memory issue when doing select | closed | 0 | 2020-07-15T16:15:04 | 2020-07-16T08:07:32 | 2020-07-16T08:07:31 | lhoestq | [] | We were passing the `nlp.Dataset` object to get the hash for the new dataset's file name. Fix #395 | true |
| 657,454,983 | https://api.github.com/repos/huggingface/datasets/issues/395 | https://github.com/huggingface/datasets/issues/395 | 395 | Memory issue when doing select | closed | 1 | 2020-07-15T15:43:38 | 2020-07-16T08:07:31 | 2020-07-16T08:07:31 | lhoestq | [] | As noticed in #389, the following code loads the entire wikipedia in memory. ```python import nlp w = nlp.load_dataset("wikipedia", "20200501.en", split="train") w.select([0]) ``` This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626) for some reason, that ... | false |
| 657,425,548 | https://api.github.com/repos/huggingface/datasets/issues/394 | https://github.com/huggingface/datasets/pull/394 | 394 | Remove remaining nested dict | closed | 0 | 2020-07-15T15:05:52 | 2020-07-16T07:39:52 | 2020-07-16T07:39:51 | mariamabarham | [] | This PR deletes the remaining unnecessary nested dict #378 | true |
| 657,330,911 | https://api.github.com/repos/huggingface/datasets/issues/393 | https://github.com/huggingface/datasets/pull/393 | 393 | Fix extracted files directory for the DownloadManager | closed | 0 | 2020-07-15T12:59:55 | 2020-07-17T17:02:16 | 2020-07-17T17:02:14 | lhoestq | [] | The cache dir was often cluttered by extracted files because of the download manager. For downloaded files, we are using the `downloads` directory to make things easier to navigate, but extracted files were still placed at the root of the cache directory. To fix that I changed the directory for extracted files to ca... | true |
| 657,313,738 | https://api.github.com/repos/huggingface/datasets/issues/392 | https://github.com/huggingface/datasets/pull/392 | 392 | Style change detection | closed | 0 | 2020-07-15T12:32:14 | 2020-07-21T13:18:36 | 2020-07-17T17:13:23 | ghomasHudson | [] | Another [PAN task](https://pan.webis.de/clef20/pan20-web/style-change-detection.html). This time about identifying when the style/author changes in documents. - There's the possibility of adding the [PAN19](https://zenodo.org/record/3577602) and PAN18 style change detection tasks too (these are datasets whose labels... | true |
| 656,956,384 | https://api.github.com/repos/huggingface/datasets/issues/390 | https://github.com/huggingface/datasets/pull/390 | 390 | Concatenate datasets | closed | 6 | 2020-07-14T23:24:37 | 2020-07-22T09:49:58 | 2020-07-22T09:49:58 | jarednielsen | [] | I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema. This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in... | true |
| 656,921,768 | https://api.github.com/repos/huggingface/datasets/issues/389 | https://github.com/huggingface/datasets/pull/389 | 389 | Fix pickling of SplitDict | closed | 11 | 2020-07-14T21:53:39 | 2020-08-04T14:38:10 | 2020-08-04T14:38:10 | mitchellgordon95 | [] | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example: ``` wiki = nlp.load_dataset('wikipedia', split='train') def sentencize(examples): ... wiki = wiki.map(sentencize, batched=True) torch.save(wiki, '... | true |
| 656,707,497 | https://api.github.com/repos/huggingface/datasets/issues/388 | https://github.com/huggingface/datasets/issues/388 | 388 | 🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17 | closed | 5 | 2020-07-14T15:36:41 | 2022-10-04T18:01:28 | 2022-10-04T18:01:28 | SamuelCahyawijaya | [ "dataset bug" ] | 1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code: ``` nlp.load_dataset('wmt14','de-en') nlp.load_dataset('wmt15','de-en') nlp.load_dataset('wmt17','de-en') nlp.load_dataset('wmt19','de-en') ``` The code runs but the download speed is **extremely slow**, the same behaviour is not ob... | false |
| 656,361,357 | https://api.github.com/repos/huggingface/datasets/issues/387 | https://github.com/huggingface/datasets/issues/387 | 387 | Conversion through to_pandas output numpy arrays for lists instead of python objects | closed | 1 | 2020-07-14T06:24:01 | 2020-07-17T11:37:00 | 2020-07-17T11:37:00 | thomwolf | [] | In a related question, the conversion through to_pandas output numpy arrays for the lists instead of python objects. Here is an example: ```python >>> dataset._data.slice(key, 1).to_pandas().to_dict("list") {'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting hi... | false |
| 655,839,067 | https://api.github.com/repos/huggingface/datasets/issues/386 | https://github.com/huggingface/datasets/pull/386 | 386 | Update dataset loading and features - Add TREC dataset | closed | 1 | 2020-07-13T13:10:18 | 2020-07-16T08:17:58 | 2020-07-16T08:17:58 | thomwolf | [] | This PR: - add a template for a new dataset script - update the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way when you update a loading script the data will be automatically updated instead of falling back to the previous version (which is ... | true |
| 655,663,997 | https://api.github.com/repos/huggingface/datasets/issues/385 | https://github.com/huggingface/datasets/pull/385 | 385 | Remove unnecessary nested dict | closed | 5 | 2020-07-13T08:46:23 | 2020-07-15T11:27:38 | 2020-07-15T10:03:53 | mariamabarham | [] | This PR is removing unnecessary nested dictionary used in some datasets. For now the following datasets are updated: - MLQA - RACE Will be adding more if necessary. #378 | true |
| 655,291,201 | https://api.github.com/repos/huggingface/datasets/issues/383 | https://github.com/huggingface/datasets/pull/383 | 383 | Adding the Linguistic Code-switching Evaluation (LinCE) benchmark | closed | 5 | 2020-07-11T22:35:20 | 2020-07-16T16:19:46 | 2020-07-16T16:19:46 | gaguilar | [] | Hi, First of all, this library is really cool! Thanks for putting all of this together! This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described in the official website (FAQ): > 1. Why do we need LinCE? >LinCE brings 10 code-switching datasets t... | true |
| 655,290,482 | https://api.github.com/repos/huggingface/datasets/issues/382 | https://github.com/huggingface/datasets/issues/382 | 382 | 1080 | closed | 0 | 2020-07-11T22:29:07 | 2020-07-11T22:49:38 | 2020-07-11T22:49:38 | saq194 | [] | | false |
| 655,277,119 | https://api.github.com/repos/huggingface/datasets/issues/381 | https://github.com/huggingface/datasets/issues/381 | 381 | NLp | closed | 0 | 2020-07-11T20:50:14 | 2020-07-11T20:50:39 | 2020-07-11T20:50:39 | Spartanthor | [] | | false |
| 655,226,316 | https://api.github.com/repos/huggingface/datasets/issues/378 | https://github.com/huggingface/datasets/issues/378 | 378 | [dataset] Structure of MLQA seems unecessary nested | closed | 2 | 2020-07-11T15:16:08 | 2020-07-15T16:17:20 | 2020-07-15T16:17:20 | thomwolf | [] | The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97 Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds? ```python ... | false |
| 655,215,790 | https://api.github.com/repos/huggingface/datasets/issues/377 | https://github.com/huggingface/datasets/issues/377 | 377 | Iyy!!! | closed | 0 | 2020-07-11T14:11:07 | 2020-07-11T14:30:51 | 2020-07-11T14:30:51 | ajinomoh | [] | | false |
| 655,047,826 | https://api.github.com/repos/huggingface/datasets/issues/376 | https://github.com/huggingface/datasets/issues/376 | 376 | to_pandas conversion doesn't always work | closed | 2 | 2020-07-10T21:33:31 | 2022-10-04T18:05:39 | 2022-10-04T18:05:39 | thomwolf | [] | For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible. Here is an example using the official SQUAD v2 JSON file. This example was found while investigating #373. ```python >>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0.... | false |
| 655,023,307 | https://api.github.com/repos/huggingface/datasets/issues/375 | https://github.com/huggingface/datasets/issues/375 | 375 | TypeError when computing bertscore | closed | 2 | 2020-07-10T20:37:44 | 2022-06-01T15:15:59 | 2022-06-01T15:15:59 | willywsm1013 | [] | Hi, I installed nlp 0.3.0 via pip, and my python version is 3.7. When I tried to compute bertscore with the code: ``` import nlp bertscore = nlp.load_metric('bertscore') # load hyps and refs ... print (bertscore.compute(hyps, refs, lang='en')) ``` I got the following error. ``` Traceback (most rece... | false |
| 654,895,066 | https://api.github.com/repos/huggingface/datasets/issues/374 | https://github.com/huggingface/datasets/pull/374 | 374 | Add dataset post processing for faiss indexes | closed | 2 | 2020-07-10T16:25:59 | 2020-07-13T13:44:03 | 2020-07-13T13:44:01 | lhoestq | [] | # Post processing of datasets for faiss indexes Now that we can have datasets with embeddings (see `wiki_pr` for example), we can allow users to load the dataset + get the Faiss index that comes with it to do nearest neighbors queries. ## Implementation proposition - Faiss indexes have to be added to the `nlp.... | true |
| 654,845,133 | https://api.github.com/repos/huggingface/datasets/issues/373 | https://github.com/huggingface/datasets/issues/373 | 373 | Segmentation fault when loading local JSON dataset as of #372 | closed | 11 | 2020-07-10T15:04:25 | 2022-10-04T18:05:47 | 2022-10-04T18:05:47 | vegarab | [] | The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, f... | false |
| 654,774,420 | https://api.github.com/repos/huggingface/datasets/issues/372 | https://github.com/huggingface/datasets/pull/372 | 372 | Make the json script more flexible | closed | 0 | 2020-07-10T13:15:15 | 2020-07-10T14:52:07 | 2020-07-10T14:52:06 | thomwolf | [] | Fix https://github.com/huggingface/nlp/issues/359 Fix https://github.com/huggingface/nlp/issues/369 JSON script now can accept JSON files containing a single dict with the records as a list in one attribute to the dict (previously it only accepted JSON files containing records as rows of dicts in the file). In t... | true |
| 654,668,242 | https://api.github.com/repos/huggingface/datasets/issues/371 | https://github.com/huggingface/datasets/pull/371 | 371 | Fix cached file path for metrics with different config names | closed | 1 | 2020-07-10T10:02:24 | 2020-07-10T13:45:22 | 2020-07-10T13:45:20 | lhoestq | [] | The config name was not taken into account to build the cached file path. It should fix #368 | true |
| 654,304,193 | https://api.github.com/repos/huggingface/datasets/issues/370 | https://github.com/huggingface/datasets/pull/370 | 370 | Allow indexing Dataset via np.ndarray | closed | 1 | 2020-07-09T19:43:15 | 2020-07-10T14:05:44 | 2020-07-10T14:05:43 | jarednielsen | [] | | true |
| 654,186,890 | https://api.github.com/repos/huggingface/datasets/issues/369 | https://github.com/huggingface/datasets/issues/369 | 369 | can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries | closed | 2 | 2020-07-09T16:16:53 | 2020-12-15T23:07:22 | 2020-07-10T14:52:06 | vegarab | [ "dataset bug" ] | Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB): ``` dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]}) ``` causes ``` Traceback (most recent call last): File "dataloader.py", line 9, in <module> ["./path/to/file.json"]}) File "/... | false |
| 654,087,251 | https://api.github.com/repos/huggingface/datasets/issues/368 | https://github.com/huggingface/datasets/issues/368 | 368 | load_metric can't acquire lock anymore | closed | 1 | 2020-07-09T14:04:09 | 2020-07-10T13:45:20 | 2020-07-10T13:45:20 | ydshieh | [] | I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this? Traceback (most recent call last): File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/n... | false |
| 654,012,984 | https://api.github.com/repos/huggingface/datasets/issues/367 | https://github.com/huggingface/datasets/pull/367 | 367 | Update Xtreme to add PAWS-X es | closed | 0 | 2020-07-09T12:14:37 | 2020-07-09T12:37:11 | 2020-07-09T12:37:10 | mariamabarham | [] | This PR adds the `PAWS-X.es` in the Xtreme dataset #362 | true |
| 653,954,896 | https://api.github.com/repos/huggingface/datasets/issues/366 | https://github.com/huggingface/datasets/pull/366 | 366 | Add quora dataset | closed | 2 | 2020-07-09T10:34:22 | 2020-07-13T17:35:21 | 2020-07-13T17:35:21 | ghomasHudson | [] | Added the [Quora question pairs dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs). Implementation Notes: - I used the original version provided on the quora website. There's also a [Kaggle competition](https://www.kaggle.com/c/quora-question-pairs) which has a nice train/test sp... | true |
| 653,845,964 | https://api.github.com/repos/huggingface/datasets/issues/365 | https://github.com/huggingface/datasets/issues/365 | 365 | How to augment data ? | closed | 6 | 2020-07-09T07:52:37 | 2020-07-10T09:12:07 | 2020-07-10T08:22:15 | astariul | [] | Is there any clean way to augment data ? For now my work-around is to use batched map, like this : ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = dataset.map(aug, batched=T... | false |
| 653,821,597 | https://api.github.com/repos/huggingface/datasets/issues/364 | https://github.com/huggingface/datasets/pull/364 | 364 | add MS MARCO dataset | closed | 7 | 2020-07-09T07:11:19 | 2020-08-06T06:15:49 | 2020-08-06T06:15:48 | mariamabarham | [] | This PR adds the MS MARCO dataset as requested in this issue #336. MS mARCO has multiple task including: - Passage and Document Retrieval - Keyphrase Extraction - QA and NLG This PR only adds the 2 versions of the QA and NLG task dataset which was realeased with the original paper here https://arxiv.org/pd... | true |
| 653,821,172 | https://api.github.com/repos/huggingface/datasets/issues/363 | https://github.com/huggingface/datasets/pull/363 | 363 | Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets | closed | 23 | 2020-07-09T07:10:30 | 2020-08-24T09:59:35 | 2020-08-24T09:59:35 | eltoto1219 | [] | nlp/features.py: The main factory class is MultiArray, every single time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples on working with this in datas... | true |
| 653,766,245 | https://api.github.com/repos/huggingface/datasets/issues/362 | https://github.com/huggingface/datasets/issues/362 | 362 | [dateset subset missing] xtreme paws-x | closed | 1 | 2020-07-09T05:04:54 | 2020-07-09T12:38:42 | 2020-07-09T12:38:42 | cosmeowpawlitan | [] | I tried nlp.load_dataset('xtreme', 'PAWS-X.es') but get the value error It turns out that the subset for Spanish is missing https://github.com/google-research-datasets/paws/tree/master/pawsx | false |
| 653,757,376 | https://api.github.com/repos/huggingface/datasets/issues/361 | https://github.com/huggingface/datasets/issues/361 | 361 | 🐛 [Metrics] ROUGE is non-deterministic | closed | 8 | 2020-07-09T04:39:37 | 2022-09-09T15:20:55 | 2020-07-20T23:48:37 | astariul | [] | If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different. Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem. Example of F-score for ROUGE-1, ROUGE-2, ROUGE-L in 2 differe... | false |
| 653,687,176 | https://api.github.com/repos/huggingface/datasets/issues/360 | https://github.com/huggingface/datasets/issues/360 | 360 | [Feature request] Add dataset.ragged_map() function for many-to-many transformations | closed | 2 | 2020-07-09T01:04:43 | 2020-07-09T19:31:51 | 2020-07-09T19:31:51 | jarednielsen | [] | `dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines. `dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero/one example. This is helpful for removing portions from t... | false |
| 653,656,279 | https://api.github.com/repos/huggingface/datasets/issues/359 | https://github.com/huggingface/datasets/issues/359 | 359 | ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures | closed | 4 | 2020-07-08T23:24:05 | 2020-07-10T14:52:06 | 2020-07-10T14:52:06 | timothyjlaurent | [] | I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-23-9aecfbee53bd> in <mo... | false |
| 653,645,121 | https://api.github.com/repos/huggingface/datasets/issues/358 | https://github.com/huggingface/datasets/pull/358 | 358 | Starting to add some real doc | closed | 1 | 2020-07-08T22:53:03 | 2020-07-14T09:58:17 | 2020-07-14T09:58:15 | thomwolf | [] | Adding a lot of documentation for: - load a dataset - explore the dataset object - process data with the dataset - add a new dataset script - share a dataset script - full package reference This version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.htm... | true |
| 653,642,292 | https://api.github.com/repos/huggingface/datasets/issues/357 | https://github.com/huggingface/datasets/pull/357 | 357 | Add hashes to cnn_dailymail | closed | 2 | 2020-07-08T22:45:21 | 2020-07-13T14:16:38 | 2020-07-13T14:16:38 | jbragg | [] | The URL hashes are helpful for comparing results from other sources. | true |
| 653,537,388 | https://api.github.com/repos/huggingface/datasets/issues/356 | https://github.com/huggingface/datasets/pull/356 | 356 | Add text dataset | closed | 0 | 2020-07-08T19:21:53 | 2020-07-10T14:19:03 | 2020-07-10T14:19:03 | jarednielsen | [] | Usage: ```python from nlp import load_dataset dset = load_dataset("text", data_files="/path/to/file.txt")["train"] ``` I created a dummy_data.zip which contains three files: `train.txt`, `test.txt`, `dev.txt`. Each of these contains two lines. It passes ```bash RUN_SLOW=1 pytest tests/test_dataset_common... | true |
| 653,451,013 | https://api.github.com/repos/huggingface/datasets/issues/355 | https://github.com/huggingface/datasets/issues/355 | 355 | can't load SNLI dataset | closed | 3 | 2020-07-08T16:54:14 | 2020-07-18T05:15:57 | 2020-07-15T07:59:01 | jxmorris12 | [] | `nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't. Is there a plan to move these datasets to huggingface servers for a more stable solution? Btw, here's the stack trace: ``` ... | false |
| 653,357,617 | https://api.github.com/repos/huggingface/datasets/issues/354 | https://github.com/huggingface/datasets/pull/354 | 354 | More faiss control | closed | 1 | 2020-07-08T14:45:20 | 2020-07-09T09:54:54 | 2020-07-09T09:54:51 | lhoestq | [] | Allow users to specify a faiss index they created themselves, as sometimes indexes can be composite for examples | true |
| 653,250,611 | https://api.github.com/repos/huggingface/datasets/issues/353 | https://github.com/huggingface/datasets/issues/353 | 353 | [Dataset requests] New datasets for Text Classification | open | 12 | 2020-07-08T12:17:58 | 2025-04-05T09:28:15 | null | thomwolf | [ "help wanted", "dataset request" ] | We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - [x] TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]** - #386 - [x] Yelp-5 - #... | false |
| 653,128,883 | https://api.github.com/repos/huggingface/datasets/issues/352 | https://github.com/huggingface/datasets/pull/352 | 352 | 🐛[BugFix]fix seqeval | closed | 7 | 2020-07-08T09:12:12 | 2020-07-16T08:26:46 | 2020-07-16T08:26:46 | AlongWY | [] | Fix seqeval process labels such as 'B', 'B-ARGM-LOC' | true |