| Column | Type | Min | Max |
|---|---|---|---|
| id | int64 | 599M | 3.26B |
| number | int64 | 1 | 7.7k |
| title | string (lengths) | 1 | 290 |
| body | string (lengths) | 0 | 228k |
| state | string (2 classes) | | |
| html_url | string (lengths) | 46 | 51 |
| created_at | timestamp[s] | 2020-04-14 10:18:02 | 2025-07-23 08:04:53 |
| updated_at | timestamp[s] | 2020-04-27 16:04:17 | 2025-07-23 18:53:44 |
| closed_at | timestamp[s] | 2020-04-14 12:01:40 | 2025-07-23 16:44:42 |
| user | dict | | |
| labels | list (lengths) | 0 | 4 |
| is_pull_request | bool (2 classes) | | |
| comments | list (lengths) | 0 | 0 |
664,412,137
429
mlsum
Hello, The tests for load_real_data fail: since there is no default language subset to download, it looks for a file that does not exist. This bug does not happen when using the load_dataset function, as it asks you to specify a language if you do not, so I am submitting this PR anyway. The dataset is available at: https://gitlab.lip6.fr/scialom/mlsum_data
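For context, a minimal sketch of how a language subset is selected when loading MLSUM through `load_dataset`; the `"fr"` config name is only an assumed example.

```python
from nlp import load_dataset

# MLSUM has no default config, so a language subset must be named explicitly
# (the "fr" code here is only an illustrative choice).
dataset = load_dataset("mlsum", "fr")
```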
closed
https://github.com/huggingface/datasets/pull/429
2020-07-23T11:52:39
2020-07-31T11:46:20
2020-07-31T11:46:20
{ "login": "RachelKer", "id": 36986299, "type": "User" }
[]
true
[]
664,367,086
428
fix concatenate_datasets
`concatenate_datasets` used to test that the different `nlp.Dataset.schema` attributes match, but this attribute was removed in #423
closed
https://github.com/huggingface/datasets/pull/428
2020-07-23T10:30:59
2020-07-23T10:35:00
2020-07-23T10:34:58
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
664,341,623
427
Allow sequence features for beam + add processed Natural Questions
## Allow Sequence features for Beam Datasets + add Natural Questions ### The issue The steps of beam datasets processing is the following: - download the source files and send them in a remote storage (gcs) - process the files using a beam runner (dataflow) - save output in remote storage (gcs) - convert output to arrow in remote storage (gcs) However it wasn't possible to process `natural_questions` because apache beam's processing outputs parquet files, and it's not yet possible to read parquet files with list features. ### The proposed solution To allow sequence features for beam I added a workaround that serializes the values using `json.dumps`, so that we end up with strings instead of the original features. Then when the arrow file is created, the serialized objects are transformed back to normal with `json.loads`. Not sure if there's a better way to do it. ### Natural Questions I was able to process NQ with it, and so I added the json infos file in this PR too. The processed arrow files are also stored in gcs. It allows you to load NQ with ```python from nlp import load_dataset nq = load_dataset("natural_questions") # download the 90GB arrow files from gcs and return the dataset ``` ### Tests I added a test case to make sure it works as expected. Note that the CI will fail because I am updating `natural_questions.py`: it's not synced with the script on S3. It will be synced as soon as this PR is merged. ``` =========================== short test summary info ============================ FAILED tests/test_hf_gcp.py::TestDatasetOnHfGcp::test_script_synced_with_s3_natural_questions/default ```
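A rough illustration of the serialization workaround described above, not the actual implementation: list features are turned into JSON strings before beam writes parquet, and decoded again when the arrow file is built.

```python
import json

# Illustrative round trip: sequence features are stored as JSON strings during
# beam processing, then decoded when the arrow file is written.
example = {"id": "q1", "tokens": ["what", "is", "beam"]}

serialized = {k: json.dumps(v) if isinstance(v, list) else v for k, v in example.items()}
# -> {'id': 'q1', 'tokens': '["what", "is", "beam"]'}

restored = dict(serialized, tokens=json.loads(serialized["tokens"]))
assert restored == example
```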
closed
https://github.com/huggingface/datasets/pull/427
2020-07-23T09:52:41
2020-07-23T13:09:30
2020-07-23T13:09:29
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
664,203,897
426
[FEATURE REQUEST] Multiprocessing for dataset.map, dataset.filter
It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset, sending each shard to a process/thread/dask pool and using the new `nlp.concatenate_datasets()` function to join them all together?
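A minimal sketch of that sharding idea, assuming `Dataset.shard` and `nlp.concatenate_datasets` behave as described elsewhere in this section and that shards can be sent to worker processes; `process_shard` is a placeholder for whatever map/filter work is needed.

```python
import multiprocessing

import nlp

def process_shard(shard):
    # Placeholder transform; any .map()/.filter() work would go here.
    return shard.map(lambda example: example)

def parallel_map(dataset, num_proc=4):
    # Split into contiguous shards, process each in its own worker,
    # then stitch the results back together.
    shards = [dataset.shard(num_proc, i, contiguous=True) for i in range(num_proc)]
    with multiprocessing.Pool(num_proc) as pool:
        processed = pool.map(process_shard, shards)
    return nlp.concatenate_datasets(processed)
```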
closed
https://github.com/huggingface/datasets/issues/426
2020-07-23T05:00:41
2021-03-12T09:34:12
2020-09-07T14:48:04
{ "login": "timothyjlaurent", "id": 2000204, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
664,029,848
425
Correct data structure for PAN-X task in XTREME dataset?
Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['train'] ``` However, I am not sure that `load_dataset()` is returning the correct data structure for NER. Currently, every row in `dataset_train` is of the form ```python {'word': str, 'ner_tag': str, 'lang': str} ``` but I think we actually want something like ```python {'words': List[str], 'ner_tags': List[str], 'langs': List[str]} ``` so that each row corresponds to a _sequence_ of words associated with each example. With the current data structure I do not think it is possible to transform `dataset_train` into a form suitable for training because we do not know the boundaries between examples. Indeed, [this line](https://github.com/google-research/xtreme/blob/522434d1aece34131d997a97ce7e9242a51a688a/third_party/utils_tag.py#L58) in the XTREME repo, processes the texts as lists of sentences, tags, and languages. ## Proposed solution Replace ```python with open(filepath) as f: data = csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE) for id_, row in enumerate(data): if row: lang, word = row[0].split(":")[0], row[0].split(":")[1] tag = row[1] yield id_, {"word": word, "ner_tag": tag, "lang": lang} ``` from [these lines](https://github.com/huggingface/nlp/blob/ce7d3a1d630b78fe27188d1706f3ea980e8eec43/datasets/xtreme/xtreme.py#L881-L887) of the `_generate_examples()` function with something like ```python guid_index = 1 with open(filepath, encoding="utf-8") as f: words = [] ner_tags = [] langs = [] for line in f: if line.startswith("-DOCSTART-") or line == "" or line == "\n": if words: yield guid_index, {"words": words, "ner_tags": ner_tags, "langs": langs} guid_index += 1 words = [] ner_tags = [] else: # pan-x data is tab separated splits = line.split("\t") # strip out en: prefix langs.append(splits[0][:2]) words.append(splits[0][3:]) if len(splits) > 1: labels.append(splits[-1].replace("\n", "")) else: # examples have no label in test set labels.append("O") ``` If you agree, me or @lvwerra would be happy to implement this and create a PR.
closed
https://github.com/huggingface/datasets/issues/425
2020-07-22T20:29:20
2020-08-02T13:30:34
2020-08-02T13:30:34
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
false
[]
663,858,552
424
Web of science
This PR adds the Web of Science dataset. #353
closed
https://github.com/huggingface/datasets/pull/424
2020-07-22T15:38:31
2020-07-23T14:27:58
2020-07-23T14:27:56
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
663,079,359
423
Change features vs schema logic
## New logic for `nlp.Features` in datasets Previously, it was confusing to have `features` and pyarrow's `schema` in `nlp.Dataset`. However `features` is supposed to be the front-facing object to define the different fields of a dataset, while `schema` is only used to write arrow files. Changes: - Remove `schema` field in `nlp.Dataset` - Make `features` the source of truth to read/write examples - `features` can no longer be `None` in `nlp.Dataset` - Update `features` after each dataset transform such as `nlp.Dataset.map` Todo: change the tests to take these changes into account
closed
https://github.com/huggingface/datasets/pull/423
2020-07-21T14:52:47
2020-07-25T09:08:34
2020-07-23T10:15:17
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
663,028,497
422
- Corrected encoding for IMDB.
The preparation phase (after the download phase) crashed on windows because of charmap encoding not being able to decode certain characters. This change suggested in Issue #347 fixes it for the IMDB dataset.
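The kind of change described is sketched below on a generic file read; the actual edited line in the IMDB script is not reproduced here, and `path` is a placeholder.

```python
path = "imdb_review.txt"  # placeholder path

# Without an explicit encoding, Windows may fall back to charmap and fail on
# some characters; forcing UTF-8 avoids the crash described above.
with open(path, encoding="utf-8") as f:
    text = f.read()
```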
closed
https://github.com/huggingface/datasets/pull/422
2020-07-21T13:46:59
2020-07-22T16:02:53
2020-07-22T16:02:53
{ "login": "ghazi-f", "id": 25091538, "type": "User" }
[]
true
[]
662,213,864
421
Style change
Ran `make quality` and `make style` on the scripts.
closed
https://github.com/huggingface/datasets/pull/421
2020-07-20T20:08:29
2020-07-22T16:08:40
2020-07-22T16:08:39
{ "login": "lordtt13", "id": 35500534, "type": "User" }
[]
true
[]
662,029,782
420
Better handle nested features
Changes: - added arrow schema to features conversion (it's going to be useful to fix #342 ) - make flatten handle deep features (useful for tfrecords conversion in #339 ) - add tests for flatten and features conversions - the reader now returns the kwargs to instantiate a Dataset (fix circular dependencies)
closed
https://github.com/huggingface/datasets/pull/420
2020-07-20T16:44:13
2020-07-21T08:20:49
2020-07-21T08:09:52
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
661,974,747
419
EmoContext dataset add
EmoContext Dataset add Signed-off-by: lordtt13 <thakurtanmay72@yahoo.com>
closed
https://github.com/huggingface/datasets/pull/419
2020-07-20T15:48:45
2020-07-24T08:22:01
2020-07-24T08:22:00
{ "login": "lordtt13", "id": 35500534, "type": "User" }
[]
true
[]
661,914,873
418
Addition of google drive links to dl_manager
Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to shun the dl_manager because it was downloading nothing from the drive links and instead use gdown. This is the script for me: ```python class EmoConfig(nlp.BuilderConfig): """BuilderConfig for SQUAD.""" def __init__(self, **kwargs): """BuilderConfig for EmoContext. Args: **kwargs: keyword arguments forwarded to super. """ super(EmoConfig, self).__init__(**kwargs) _TEST_URL = "https://drive.google.com/file/d/1Hn5ytHSSoGOC4sjm3wYy0Dh0oY_oXBbb/view?usp=sharing" _TRAIN_URL = "https://drive.google.com/file/d/12Uz59TYg_NtxOy7SXraYeXPMRT7oaO7X/view?usp=sharing" class EmoDataset(nlp.GeneratorBasedBuilder): """ SemEval-2019 Task 3: EmoContext Contextual Emotion Detection in Text. Version 1.0.0 """ VERSION = nlp.Version("1.0.0") force = False def _info(self): return nlp.DatasetInfo( description=_DESCRIPTION, features=nlp.Features( { "text": nlp.Value("string"), "label": nlp.features.ClassLabel(names=["others", "happy", "sad", "angry"]), } ), supervised_keys=None, homepage="https://www.aclweb.org/anthology/S19-2005/", citation=_CITATION, ) def _get_drive_url(self, url): base_url = 'https://drive.google.com/uc?id=' split_url = url.split('/') return base_url + split_url[5] def _split_generators(self, dl_manager): """Returns SplitGenerators.""" if(not os.path.exists("emo-train.json") or self.force): gdown.download(self._get_drive_url(_TRAIN_URL), "emo-train.json", quiet = True) if(not os.path.exists("emo-test.json") or self.force): gdown.download(self._get_drive_url(_TEST_URL), "emo-test.json", quiet = True) return [ nlp.SplitGenerator( name=nlp.Split.TRAIN, gen_kwargs={ "filepath": "emo-train.json", "split": "train", }, ), nlp.SplitGenerator( name=nlp.Split.TEST, gen_kwargs={"filepath": "emo-test.json", "split": "test"}, ), ] def _generate_examples(self, filepath, split): """ Yields examples. """ with open(filepath, 'rb') as f: data = json.load(f) for id_, text, label in zip(data["text"].keys(), data["text"].values(), data["Label"].values()): yield id_, { "text": text, "label": label, } ``` Can someone help me in adding gdrive links to be used with default dl_manager or adding gdown as another dl_manager, because I'd like to add this dataset to nlp's official database.
closed
https://github.com/huggingface/datasets/issues/418
2020-07-20T14:52:02
2020-07-20T15:39:32
2020-07-20T15:39:32
{ "login": "lordtt13", "id": 35500534, "type": "User" }
[]
false
[]
661,804,054
417
Fix docstrings for multiple metric instances
We change the docstrings of `nlp.Metric.compute`, `nlp.Metric.add` and `nlp.Metric.add_batch` depending on which metric is instantiated. However we had issues when instantiating multiple metrics (docstrings were duplicated). This should fix #304
closed
https://github.com/huggingface/datasets/pull/417
2020-07-20T13:08:59
2020-07-22T09:51:00
2020-07-22T09:50:59
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
661,635,393
416
Fix xtreme panx directory
Fix #412
closed
https://github.com/huggingface/datasets/pull/416
2020-07-20T10:09:17
2020-07-21T08:15:46
2020-07-21T08:15:44
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
660,687,076
415
Something is wrong with WMT 19 kk-en dataset
The translation in the `train` set does not look right: ``` >>>import nlp >>>from nlp import load_dataset >>>dataset = load_dataset('wmt19', 'kk-en') >>>dataset["train"]["translation"][0] {'kk': 'Trumpian Uncertainty', 'en': 'Трамптық белгісіздік'} >>>dataset["validation"]["translation"][0] {'kk': 'Ақша-несие саясатының сценарийін қайта жазсақ', 'en': 'Rewriting the Monetary-Policy Script'} ```
open
https://github.com/huggingface/datasets/issues/415
2020-07-19T08:18:51
2020-07-20T09:54:26
null
{ "login": "ChenghaoMou", "id": 32014649, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
660,654,013
414
from_dict delete?
AttributeError: type object 'Dataset' has no attribute 'from_dict'
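For context, the call this error presumably refers to would look roughly like the usage shown in the description of #390; the columns are hypothetical, and `Dataset.from_dict` is assumed to exist only on newer versions than the one raising the error.

```python
from nlp import Dataset

data = {"text": ["first example", "second example"], "label": [0, 1]}
# Raises AttributeError on library versions that do not yet provide from_dict.
dataset = Dataset.from_dict(data)
```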
closed
https://github.com/huggingface/datasets/issues/414
2020-07-19T07:08:36
2020-07-21T02:21:17
2020-07-21T02:21:17
{ "login": "hackerxiaobai", "id": 22817243, "type": "User" }
[]
false
[]
660,063,655
413
Is there a way to download only NQ dev?
Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('natural_questions', split="validation", beam_runner="DirectRunner") ``` But this still triggered a big download of presumably the whole dataset. Is there any way of doing this or are splits / slicing options only available after downloading? Thanks!
closed
https://github.com/huggingface/datasets/issues/413
2020-07-18T10:28:23
2022-02-11T09:50:21
2022-02-11T09:50:21
{ "login": "tholor", "id": 1563902, "type": "User" }
[]
false
[]
660,047,139
412
Unable to load XTREME dataset from disk
Hi 🤗 team! ## Description of the problem Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark. I have manually downloaded the `AmazonPhotos.zip` file from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1) and am running into a `FileNotFoundError` when I point to the location of the dataset. As far as I can tell, the problem is that `AmazonPhotos.zip` decompresses to `panx_dataset` and `load_dataset()` is not looking in the correct path: ``` # path where load_dataset is looking for fr.tar.gz /root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/ # path where it actually exists /root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/panx_dataset/ ``` ## Steps to reproduce the problem 1. Manually download the XTREME benchmark from [here](https://www.amazon.com/clouddrive/share/d3KGCRCIYwhKJF0H3eWA26hjg2ZCRhjpEQtDL70FSBN?_encoding=UTF8&%2AVersion%2A=1&%2Aentries%2A=0&mgh=1) 2. Run the following code snippet ```python from nlp import load_dataset # AmazonPhotos.zip is in the root of the folder dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./') ``` 3. Here is the stack trace ``` --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) <ipython-input-4-26786bb5fa93> in <module> ----> 1 dataset = load_dataset("xtreme", "PAN-X.fr", data_dir='./') /usr/local/lib/python3.6/dist-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 /usr/local/lib/python3.6/dist-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 430 verify_infos = not save_infos and not ignore_verifications 431 self._download_and_prepare( --> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 433 ) 434 # Sync info /usr/local/lib/python3.6/dist-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 464 split_dict = SplitDict(dataset_name=self.name) 465 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 466 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 467 # Checksums verification 468 if verify_infos: /usr/local/lib/python3.6/dist-packages/nlp/datasets/xtreme/b8c2ed3583a7a7ac60b503576dfed3271ac86757628897e945bd329c43b8a746/xtreme.py in _split_generators(self, dl_manager) 725 panx_dl_dir = dl_manager.extract(panx_path) 726 lang = self.config.name.split(".")[1] --> 727 lang_folder = dl_manager.extract(os.path.join(panx_dl_dir, lang + ".tar.gz")) 728 return [ 729 nlp.SplitGenerator( /usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in extract(self, path_or_paths) 196 """ 197 return map_nested( --> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths, 199 ) 200 /usr/local/lib/python3.6/dist-packages/nlp/utils/py_utils.py in 
map_nested(function, data_struct, dict_only, map_tuple) 170 return tuple(mapped) 171 # Singleton --> 172 return function(data_struct) 173 174 /usr/local/lib/python3.6/dist-packages/nlp/utils/download_manager.py in <lambda>(path) 196 """ 197 return map_nested( --> 198 lambda path: cached_path(path, extract_compressed_file=True, force_extract=False), path_or_paths, 199 ) 200 /usr/local/lib/python3.6/dist-packages/nlp/utils/file_utils.py in cached_path(url_or_filename, download_config, **download_kwargs) 203 elif urlparse(url_or_filename).scheme == "": 204 # File, but it doesn't exist. --> 205 raise FileNotFoundError("Local file {} doesn't exist".format(url_or_filename)) 206 else: 207 # Something unknown FileNotFoundError: Local file /root/.cache/huggingface/datasets/9b8c4f1578e45cb2539332c79738beb3b54afbcd842b079cabfd79e3ed6704f6/fr.tar.gz doesn't exist ``` ## OS and hardware ``` - `nlp` version: 0.3.0 - Platform: Linux-4.15.0-72-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.1.0 (True) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ```
closed
https://github.com/huggingface/datasets/issues/412
2020-07-18T09:55:00
2020-07-21T08:15:44
2020-07-21T08:15:44
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
false
[]
659,393,398
411
Sbf
This PR adds the Social Bias Frames dataset (ACL 2020). Dataset homepage: https://homes.cs.washington.edu/~msap/social-bias-frames/
closed
https://github.com/huggingface/datasets/pull/411
2020-07-17T16:19:45
2020-07-21T09:13:46
2020-07-21T09:13:45
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
659,242,871
410
20newsgroup
Add 20Newsgroup dataset. #353
closed
https://github.com/huggingface/datasets/pull/410
2020-07-17T13:07:57
2020-07-20T07:05:29
2020-07-20T07:05:28
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
659,128,611
409
train_test_split error: 'dict' object has no attribute 'deepcopy'
`train_test_split` is giving me an error when I try and call it: `'dict' object has no attribute 'deepcopy'` ## To reproduce ``` dataset = load_dataset('glue', 'mrpc', split='train') dataset = dataset.train_test_split(test_size=0.2) ``` ## Full Stacktrace ``` --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-12-feb740dbec9a> in <module> 1 dataset = load_dataset('glue', 'mrpc', split='train') ----> 2 dataset = dataset.train_test_split(test_size=0.2) ~/anaconda3/envs/fastai2_me/lib/python3.7/site-packages/nlp/arrow_dataset.py in train_test_split(self, test_size, train_size, shuffle, seed, generator, keep_in_memory, load_from_cache_file, train_cache_file_name, test_cache_file_name, writer_batch_size) 1032 "writer_batch_size": writer_batch_size, 1033 } -> 1034 train_kwargs = cache_kwargs.deepcopy() 1035 train_kwargs["split"] = "train" 1036 test_kwargs = cache_kwargs.deepcopy() AttributeError: 'dict' object has no attribute 'deepcopy' ```
closed
https://github.com/huggingface/datasets/issues/409
2020-07-17T10:36:28
2020-07-21T14:34:52
2020-07-21T14:34:52
{ "login": "morganmcg1", "id": 20516801, "type": "User" }
[]
false
[]
659,064,144
408
Add tests datasets gcp
Some datasets are available on our google cloud storage in arrow format, so that the users don't need to process the data. These tests make sure that they're always available. It also makes sure that their scripts are in sync between S3 and the repo. This should avoid future issues like #407
closed
https://github.com/huggingface/datasets/pull/408
2020-07-17T09:23:27
2020-07-17T09:26:57
2020-07-17T09:26:56
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
658,672,736
407
MissingBeamOptions for Wikipedia 20200501.en
There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: ``` Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, total: 34.06 GiB) to /home/hltcoe/mgordon/.cache/huggingface/datasets/wikipedia/20200501.en/1.0.0/76b0b2747b679bb0ee7a1621e50e5a6378477add0c662668a324a5bc07d516dd... Traceback (most recent call last): File "scripts/download.py", line 11, in <module> fire.Fire(download_pretrain) File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 138, in Fire component_trace = _Fire(component, args, parsed_flag_args, context, name) File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 468, in _Fire target=component.__name__) File "/home/hltcoe/mgordon/.conda/envs/huggingface/lib/python3.6/site-packages/fire/core.py", line 672, in _CallAndUpdateTrace component = fn(*varargs, **kwargs) File "scripts/download.py", line 6, in download_pretrain nlp.load_dataset('wikipedia', "20200501.en", split='train') File "/exp/mgordon/nlp/src/nlp/load.py", line 534, in load_dataset save_infos=save_infos, File "/exp/mgordon/nlp/src/nlp/builder.py", line 460, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/exp/mgordon/nlp/src/nlp/builder.py", line 870, in _download_and_prepare "\n\t`{}`".format(usage_example) nlp.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, S park, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20200501.en', beam_runner='DirectRunner')` ```
closed
https://github.com/huggingface/datasets/issues/407
2020-07-16T23:48:03
2021-01-12T11:41:16
2020-07-17T14:24:28
{ "login": "mitchellgordon95", "id": 7490438, "type": "User" }
[]
false
[]
658,581,764
406
Faster Shuffling?
Consider shuffling bookcorpus: ``` dataset = nlp.load_dataset('bookcorpus', split='train') dataset.shuffle() ``` According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`. But I can also just write the lines to a text file: ``` batch_size = 100000 with open('tmp.txt', 'w+') as out_f: for i in tqdm(range(0, len(dataset), batch_size)): batch = dataset[i:i+batch_size]['text'] print("\n".join(batch), file=out_f) ``` Which completes in a couple minutes, followed by `shuf tmp.txt > tmp2.txt` which completes in under a minute. And finally, ``` dataset = nlp.load_dataset('text', data_files='tmp2.txt') ``` Which completes in under 10 minutes. I read up on Apache Arrow this morning, and it seems like the columnar data format is not especially well-suited to shuffling rows, since moving items around requires a lot of book-keeping. Is shuffle inherently slow, or am I just using it wrong? And if it is slow, would it make sense to try converting the data to a row-based format on disk and then shuffling? (Instead of calling select with a random permutation, as is currently done.)
closed
https://github.com/huggingface/datasets/issues/406
2020-07-16T21:21:53
2023-08-16T09:52:39
2020-09-07T14:45:25
{ "login": "mitchellgordon95", "id": 7490438, "type": "User" }
[]
false
[]
658,580,192
405
Make select() faster by batching reads
Here's a benchmark: ``` dataset = nlp.load_dataset('bookcorpus', split='train') start = time.time() dataset.select(np.arange(1000), reader_batch_size=1, load_from_cache_file=False) end = time.time() print(f'{end - start}') start = time.time() dataset.select(np.arange(1000), reader_batch_size=1000, load_from_cache_file=False) end = time.time() print(f'{end - start}') ``` Without batching, select takes around 1.27 seconds. With batching, it takes around 0.01 seconds. The slowness was upsetting me because dataset.shuffle() was supposed to take ~27 hours for bookcorpus. Now with the fix it takes ~2.5 hours (which still is pretty slow, but I'll open a separate issue for that).
closed
https://github.com/huggingface/datasets/pull/405
2020-07-16T21:19:45
2020-07-17T17:05:44
2020-07-17T16:51:26
{ "login": "mitchellgordon95", "id": 7490438, "type": "User" }
[]
true
[]
658,400,987
404
Add seed in metrics
With #361 we noticed that some metrics were not deterministic. In this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`. The seed is set only when `compute` is called, and reset afterwards. Moreover, when calling `compute` with the same metric instance (i.e. same experiment_id), the metric will always return the same results given the same inputs. This is the case even if the seed was not specified by the user, as the previous seed is going to be reused. However, instantiating a metric twice (two different experiments) without specifying a seed can produce different results.
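A hedged sketch of the usage the PR describes; the exact keyword name (`seed`) and its placement in `load_metric` are assumed from the description above.

```python
import nlp

# Assumed usage per the PR description: the seed is set while compute() runs
# and reset afterwards, so repeated calls on the same instance agree.
metric = nlp.load_metric("glue", "mrpc", seed=42)

first = metric.compute(predictions=[0, 1, 1], references=[0, 1, 0])
second = metric.compute(predictions=[0, 1, 1], references=[0, 1, 0])
assert first == second  # same instance + same inputs -> same results
```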
closed
https://github.com/huggingface/datasets/pull/404
2020-07-16T17:27:05
2020-07-20T10:12:35
2020-07-20T10:12:34
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
658,325,756
403
return python objects instead of arrays by default
We were using to_pandas() to convert from arrow types; however, it returns numpy arrays instead of python lists. I fixed it by using to_pydict/to_pylist instead. Fix #387 It was mentioned in https://github.com/huggingface/transformers/issues/5729
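The difference can be seen directly on a small pyarrow table; this is only an illustration of why the change matters, not the patched code itself.

```python
import pyarrow as pa

table = pa.table({"input_ids": [[101, 102], [101, 103, 102]]})

# Going through pandas yields numpy arrays for list columns...
via_pandas = table.to_pandas().to_dict("list")
print(type(via_pandas["input_ids"][0]))  # <class 'numpy.ndarray'>

# ...while to_pydict keeps plain python lists.
via_pydict = table.to_pydict()
print(type(via_pydict["input_ids"][0]))  # <class 'list'>
```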
closed
https://github.com/huggingface/datasets/pull/403
2020-07-16T15:51:52
2020-07-17T11:37:01
2020-07-17T11:37:00
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
658,001,288
402
Search qa
add SearchQA dataset #336
closed
https://github.com/huggingface/datasets/pull/402
2020-07-16T09:00:10
2020-07-16T14:27:00
2020-07-16T14:26:59
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
657,996,252
401
add web_questions
Add the WebQuestions dataset #336 Maybe @patrickvonplaten can help with the dummy_data structure? It is still broken.
closed
https://github.com/huggingface/datasets/pull/401
2020-07-16T08:54:59
2020-08-06T06:16:20
2020-08-06T06:16:19
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
657,975,600
400
Web questions
Add the WebQuestions dataset #336
closed
https://github.com/huggingface/datasets/pull/400
2020-07-16T08:28:29
2020-07-16T08:50:51
2020-07-16T08:42:54
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
657,841,433
399
Spelling mistake
In "Formatting the dataset" part, "The two toehr modifications..." should be "The two other modifications..." ,the word "other" wrong spelled as "toehr".
closed
https://github.com/huggingface/datasets/pull/399
2020-07-16T04:37:58
2020-07-16T06:49:48
2020-07-16T06:49:37
{ "login": "BlancRay", "id": 9410067, "type": "User" }
[]
true
[]
657,511,962
398
Add inline links
Add inline links to `Contributing.md`
closed
https://github.com/huggingface/datasets/pull/398
2020-07-15T17:04:04
2020-07-22T10:14:22
2020-07-22T10:14:22
{ "login": "bharatr21", "id": 13381361, "type": "User" }
[]
true
[]
657,510,856
397
Add contiguous sharding
This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https://github.com/huggingface/nlp/pull/389 also uses it for splitting the dataset for distributed preprocessing. Usage: ``` nlp.concatenate_datasets([dset.shard(n, i, contiguous=True) for i in range(n)]) ```
closed
https://github.com/huggingface/datasets/pull/397
2020-07-15T17:02:58
2020-07-17T16:59:31
2020-07-17T16:59:31
{ "login": "jarednielsen", "id": 4564897, "type": "User" }
[]
true
[]
657,477,952
396
Fix memory issue when doing select
We were passing the `nlp.Dataset` object to get the hash for the new dataset's file name. Fix #395
closed
https://github.com/huggingface/datasets/pull/396
2020-07-15T16:15:04
2020-07-16T08:07:32
2020-07-16T08:07:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
657,454,983
395
Memory issue when doing select
As noticed in #389, the following code loads the entire wikipedia in memory. ```python import nlp w = nlp.load_dataset("wikipedia", "20200501.en", split="train") w.select([0]) ``` This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626), which for some reason tries to serialize the function with all the wikipedia data attached to it. This is not the case with `.map` or `.filter`. However, functions that are based on `.select`, like `.shuffle`, `.shard`, `.train_test_split` and `.sort`, are affected.
closed
https://github.com/huggingface/datasets/issues/395
2020-07-15T15:43:38
2020-07-16T08:07:31
2020-07-16T08:07:31
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
false
[]
657,425,548
394
Remove remaining nested dict
This PR deletes the remaining unnecessary nested dict #378
closed
https://github.com/huggingface/datasets/pull/394
2020-07-15T15:05:52
2020-07-16T07:39:52
2020-07-16T07:39:51
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
657,330,911
393
Fix extracted files directory for the DownloadManager
The cache dir was often cluttered by extracted files because of the download manager. For downloaded files, we are using the `downloads` directory to make things easier to navigate, but extracted files were still placed at the root of the cache directory. To fix that I changed the directory for extracted files to cache_dir/downloads/extracted.
closed
https://github.com/huggingface/datasets/pull/393
2020-07-15T12:59:55
2020-07-17T17:02:16
2020-07-17T17:02:14
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
657,313,738
392
Style change detection
Another [PAN task](https://pan.webis.de/clef20/pan20-web/style-change-detection.html). This time about identifying when the style/author changes in documents. - There's the possibility of adding the [PAN19](https://zenodo.org/record/3577602) and PAN18 style change detection tasks too (these are datasets whose labels are a subset of PAN20's). These would probably make more sense as separate datasets (like wmt is now) - I've converted the integer 0,1 values to a boolean - Using manually downloaded data again. This might be changed at some point following the discussion in https://github.com/huggingface/nlp/pull/349.
closed
https://github.com/huggingface/datasets/pull/392
2020-07-15T12:32:14
2020-07-21T13:18:36
2020-07-17T17:13:23
{ "login": "ghomasHudson", "id": 13795113, "type": "User" }
[]
true
[]
656,956,384
390
Concatenate datasets
I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema. This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in love with the method name, so would love to hear suggestions. Usage: ```python from nlp import Dataset, load_dataset data1, data2 = {"id": [0, 1, 2]}, {"id": [3, 4, 5]} dset1, dset2 = Dataset.from_dict(data1), Dataset.from_dict(data2) dset_concat = Dataset.from_concat([dset1, dset2]) print(dset_concat) # Dataset(schema: {'id': 'int64'}, num_rows: 6) ```
closed
https://github.com/huggingface/datasets/pull/390
2020-07-14T23:24:37
2020-07-22T09:49:58
2020-07-22T09:49:58
{ "login": "jarednielsen", "id": 4564897, "type": "User" }
[]
true
[]
656,921,768
389
Fix pickling of SplitDict
It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example: ``` wiki = nlp.load_dataset('wikipedia', split='train') def sentencize(examples): ... wiki = wiki.map(sentencize, batched=True) torch.save(wiki, 'sentencized_wiki_dataset.pt') ``` However, upon unpickling the dataset via torch.load(...), this error is raised: ``` ValueError("Cannot add elem. Use .add() instead.") ``` On line [492 of splits.py](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). This is because SplitDict subclasses dict, and pickle treats [dicts specially](https://github.com/huggingface/nlp/blob/master/src/nlp/splits.py#L492). Pickle expects access to `dict.__setitem__`, but this is disallowed by the class. The workaround is to provide an explicit interface for pickle to call when pickling and unpickling, thereby avoiding the use of `__setitem__`. Testing: - Manually pickled and unpickled a modified wikipedia dataset. - Ran `make style` I would be happy to run any other tests, but I couldn't find any in the contributing guidelines.
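The general pattern for giving a dict subclass an explicit pickle interface is sketched below with a toy class; this is not the actual `SplitDict` code, just the `__reduce__`-based idea the description refers to.

```python
import pickle

class GuardedDict(dict):
    """Toy stand-in for a dict subclass that forbids direct item assignment."""

    def __setitem__(self, key, value):
        raise ValueError("Cannot add elem. Use .add() instead.")

    def add(self, key, value):
        super().__setitem__(key, value)

    def __reduce__(self):
        # Rebuild through a plain dict copy instead of letting pickle call the
        # (forbidden) __setitem__ on an empty instance during unpickling.
        return (_rebuild_guarded_dict, (dict(self),))

def _rebuild_guarded_dict(data):
    # Module-level helper so that pickle can locate it by name.
    obj = GuardedDict()
    for key, value in data.items():
        obj.add(key, value)
    return obj

d = GuardedDict()
d.add("train", "train-split-info")
assert pickle.loads(pickle.dumps(d)) == {"train": "train-split-info"}
```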
closed
https://github.com/huggingface/datasets/pull/389
2020-07-14T21:53:39
2020-08-04T14:38:10
2020-08-04T14:38:10
{ "login": "mitchellgordon95", "id": 7490438, "type": "User" }
[]
true
[]
656,707,497
388
🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17
1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code: ``` nlp.load_dataset('wmt14','de-en') nlp.load_dataset('wmt15','de-en') nlp.load_dataset('wmt17','de-en') nlp.load_dataset('wmt19','de-en') ``` The code runs but the download speed is **extremely slow**, the same behaviour is not observed on `wmt16` and `wmt18` 2. When trying to download `wmt17 zh-en`, I got the following error: > ConnectionError: Couldn't reach https://storage.googleapis.com/tfdataset-data/downloadataset/uncorpus/UNv1.0.en-zh.tar.gz
closed
https://github.com/huggingface/datasets/issues/388
2020-07-14T15:36:41
2022-10-04T18:01:28
2022-10-04T18:01:28
{ "login": "SamuelCahyawijaya", "id": 2826602, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
656,361,357
387
Conversion through to_pandas outputs numpy arrays for lists instead of python objects
In a related question, the conversion through to_pandas output numpy arrays for the lists instead of python objects. Here is an example: ```python >>> dataset._data.slice(key, 1).to_pandas().to_dict("list") {'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [array([ 101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102])], 'token_type_ids': [array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0])], 'attention_mask': [array([1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1])]} >>> type(dataset._data.slice(key, 1).to_pandas().to_dict("list")['input_ids'][0]) <class 'numpy.ndarray'> >>> dataset._data.slice(key, 1).to_pydict() {'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting his evidence .'], 'sentence2': ['Referring to him as only " the witness " , Amrozi accused his brother of deliberately distorting his evidence .'], 'label': [1], 'idx': [0], 'input_ids': [[101, 7277, 2180, 5303, 4806, 1117, 1711, 117, 2292, 1119, 1270, 107, 1103, 7737, 107, 117, 1104, 9938, 4267, 12223, 21811, 1117, 2554, 119, 102]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]} ```
closed
https://github.com/huggingface/datasets/issues/387
2020-07-14T06:24:01
2020-07-17T11:37:00
2020-07-17T11:37:00
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
false
[]
655,839,067
386
Update dataset loading and features - Add TREC dataset
This PR: - adds a template for a new dataset script - updates the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way, when you update a loading script, the data will be automatically updated instead of falling back to the previous version (which is usually outdated). This makes it in particular easier to iterate when writing a new dataset loading script. - fixes a bug in the `ClassLabel` feature and makes it more flexible so that its methods `str2int` and `int2str` can also accept lists, numpy arrays and PyTorch/TensorFlow tensors. - adds the TREC-6 dataset
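A small sketch of the `ClassLabel` behaviour mentioned above, using the TREC-6 coarse label names; accepting lists (and numpy/PyTorch/TensorFlow tensors) is what this PR adds, so the last call is assumed to work only after the change.

```python
import nlp

# TREC-6 coarse question classes.
labels = nlp.features.ClassLabel(names=["DESC", "ENTY", "ABBR", "HUM", "NUM", "LOC"])

print(labels.str2int("HUM"))  # 3
print(labels.int2str(3))      # "HUM"

# Per this PR, str2int/int2str also accept lists (and numpy/torch/tf tensors):
print(labels.str2int(["DESC", "LOC"]))  # [0, 5]
```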
closed
https://github.com/huggingface/datasets/pull/386
2020-07-13T13:10:18
2020-07-16T08:17:58
2020-07-16T08:17:58
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
655,663,997
385
Remove unnecessary nested dict
This PR removes the unnecessary nested dictionaries used in some datasets. For now the following datasets are updated: - MLQA - RACE Will be adding more if necessary. #378
closed
https://github.com/huggingface/datasets/pull/385
2020-07-13T08:46:23
2020-07-15T11:27:38
2020-07-15T10:03:53
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
655,291,201
383
Adding the Linguistic Code-switching Evaluation (LinCE) benchmark
Hi, First of all, this library is really cool! Thanks for putting all of this together! This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described in the official website (FAQ): > 1. Why do we need LinCE? >LinCE brings 10 code-switching datasets together for 4 tasks and 4 language pairs with 5 leaderboards in a single evaluation platform. We examined each dataset and fixed major issues on the partitions (or even define official partitions) with a comprehensive stratification method (see our paper for more details). >Besides, we believe that online benchmarks like LinCE bring steady research progress and allow to compare state-of-the-art models at the pace of the progress in NLP. We expect to benefit greatly the code-switching community with this benchmark. The data comes from social media and here's the summary table of tasks per language pair: | Language Pairs | LID | POS | NER | SA | |----------------------------------------|-----|-----|-----|----| | Spanish-English | ✅ | ✅ | ✅ | ✅ | | Hindi-English | ✅ | ✅ | ✅ | | | Modern Standard Arabic-Egyptian Arabic | ✅ | | ✅ | | | Nepali-English | ✅ | | | | The tasks are as follows: * LID: token-level language identification * POS: part-of-speech tagging * NER: named entity recognition * SA: sentiment analysis With the exception of MSA-EA, the rest of the datasets contain token-level LID labels. ## Usage For Spanish-English LID, we can load the data as follows: ``` import nlp data = nlp.load_dataset('./datasets/lince/lince.py', 'lid_spaeng') for split in data: print(data[split]) ``` Here's the output: ``` Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 21030) Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 3332) Dataset(schema: {'idx': 'int32', 'tokens': 'list<item: string>', 'lid': 'list<item: string>'}, num_rows: 8289) ``` Here's the list of shortcut names for every dataset available in LinCE: * `lid_spaeng` * `lid_hineng` * `lid_nepeng` * `lid_msaea` * `pos_spaeng` * `pos_hineng` * `ner_spaeng` * `ner_hineng` * `ner_msaea` * `sa_spaeng` All the numbers match with Table 3 in the LinCE [paper](https://www.aclweb.org/anthology/2020.lrec-1.223.pdf). Also, note that the MSA-EA datasets use the Persian script while the other datasets use the Roman script. 
## Features Here is how the features look in the case of language identification (LID) tasks: | LID Feature | Type | Description | |----------------------|---------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | For part-of-speech (POS) tagging: | POS Feature | Type | Description | |----------------------|---------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | | `pos` | `list<str>` | List of POS tags (string) of a sentence | For named entity recognition (NER): | NER Feature | Type | Description | |----------------------|---------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | | `ner` | `list<str>` | List of NER labels (string) of a sentence | **NOTE**: the MSA-EA NER dataset does not contain the `lid` feature. For sentiment analysis (SA): | SA Feature | Type | Description | |---------------------|-------------|-------------------------------------------| | `idx` | `int` | Dataset index of current sentence | | `tokens` | `list<str>` | List of tokens (string) of a sentence | | `lid` | `list<str>` | List of LID labels (string) of a sentence | | `sa` | `str` | Sentiment label (string) of a sentence |
closed
https://github.com/huggingface/datasets/pull/383
2020-07-11T22:35:20
2020-07-16T16:19:46
2020-07-16T16:19:46
{ "login": "gaguilar", "id": 5833357, "type": "User" }
[]
true
[]
655,290,482
382
1080
closed
https://github.com/huggingface/datasets/issues/382
2020-07-11T22:29:07
2020-07-11T22:49:38
2020-07-11T22:49:38
{ "login": "saq194", "id": 60942503, "type": "User" }
[]
false
[]
655,277,119
381
NLp
closed
https://github.com/huggingface/datasets/issues/381
2020-07-11T20:50:14
2020-07-11T20:50:39
2020-07-11T20:50:39
{ "login": "Spartanthor", "id": 68147610, "type": "User" }
[]
false
[]
655,226,316
378
[dataset] Structure of MLQA seems unnecessarily nested
The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97 Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds? ```python features=nlp.Features( { "context": nlp.Value("string"), "questions": nlp.features.Sequence({"question": nlp.Value("string")}), "answers": nlp.features.Sequence( {"text": nlp.Value("string"), "answer_start": nlp.Value("int32"),} ), "ids": nlp.features.Sequence({"idx": nlp.Value("string")}) ```
closed
https://github.com/huggingface/datasets/issues/378
2020-07-11T15:16:08
2020-07-15T16:17:20
2020-07-15T16:17:20
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
false
[]
655,215,790
377
Iyy!!!
closed
https://github.com/huggingface/datasets/issues/377
2020-07-11T14:11:07
2020-07-11T14:30:51
2020-07-11T14:30:51
{ "login": "ajinomoh", "id": 68154535, "type": "User" }
[]
false
[]
655,047,826
376
to_pandas conversion doesn't always work
For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible. Here is an example using the official SQUAD v2 JSON file. This example was found while investigating #373. ```python >>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0.json"]}, download_mode=nlp.GenerateMode.FORCE_REDOWNLOAD, version="1.0.0", field='data') >>> squad['train'] Dataset(schema: {'title': 'string', 'paragraphs': 'list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>>'}, num_rows: 442) >>> squad['train'][0] Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 589, in __getitem__ format_kwargs=self._format_kwargs, File "/Users/thomwolf/Documents/GitHub/datasets/src/nlp/arrow_dataset.py", line 529, in _getitem outputs = self._unnest(self._data.slice(key, 1).to_pandas().to_dict("list")) File "pyarrow/array.pxi", line 559, in pyarrow.lib._PandasConvertible.to_pandas File "pyarrow/table.pxi", line 1367, in pyarrow.lib.Table._to_pandas File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 766, in table_to_blockmanager blocks = _table_to_blocks(options, table, categories, ext_columns_dtypes) File "/Users/thomwolf/miniconda2/envs/datasets/lib/python3.7/site-packages/pyarrow/pandas_compat.py", line 1101, in _table_to_blocks list(extension_columns.keys())) File "pyarrow/table.pxi", line 881, in pyarrow.lib.table_to_blocks File "pyarrow/error.pxi", line 105, in pyarrow.lib.check_status pyarrow.lib.ArrowNotImplementedError: Not implemented type for Arrow list to pandas: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string> ``` cc @lhoestq would we have a way to detect this from the schema maybe? 
Here is the schema for this pretty complex JSON: ```python >>> squad['train'].schema title: string paragraphs: list<item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string>> child 0, item: struct<qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>>, context: string> child 0, qas: list<item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>>> child 0, item: struct<question: string, id: string, answers: list<item: struct<text: string, answer_start: int64>>, is_impossible: bool, plausible_answers: list<item: struct<text: string, answer_start: int64>>> child 0, question: string child 1, id: string child 2, answers: list<item: struct<text: string, answer_start: int64>> child 0, item: struct<text: string, answer_start: int64> child 0, text: string child 1, answer_start: int64 child 3, is_impossible: bool child 4, plausible_answers: list<item: struct<text: string, answer_start: int64>> child 0, item: struct<text: string, answer_start: int64> child 0, text: string child 1, answer_start: int64 child 1, context: string ```
closed
https://github.com/huggingface/datasets/issues/376
2020-07-10T21:33:31
2022-10-04T18:05:39
2022-10-04T18:05:39
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
false
[]
655,023,307
375
TypeError when computing bertscore
Hi, I installed nlp 0.3.0 via pip, and my python version is 3.7. When I tried to compute bertscore with the code: ``` import nlp bertscore = nlp.load_metric('bertscore') # load hyps and refs ... print (bertscore.compute(hyps, refs, lang='en')) ``` I got the following error. ``` Traceback (most recent call last): File "bert_score_evaluate.py", line 16, in <module> print (bertscore.compute(hyps, refs, lang='en')) File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metric.py", line 200, in compute output = self._compute(predictions=predictions, references=references, **metrics_kwargs) File "/home/willywsm/anaconda3/envs/torcher/lib/python3.7/site-packages/nlp/metrics/bertscore/fb176889831bf0ce995ed197edc94b2e9a83f647a869bb8c9477dbb2d04d0f08/bertscore.py", line 105, in _compute hashcode = bert_score.utils.get_hash(model_type, num_layers, idf, rescale_with_baseline) TypeError: get_hash() takes 3 positional arguments but 4 were given ``` It seems like there is something wrong with get_hash() function?
closed
https://github.com/huggingface/datasets/issues/375
2020-07-10T20:37:44
2022-06-01T15:15:59
2022-06-01T15:15:59
{ "login": "willywsm1013", "id": 13269577, "type": "User" }
[]
false
[]
654,895,066
374
Add dataset post processing for faiss indexes
# Post processing of datasets for faiss indexes Now that we can have datasets with embeddings (see `wiki_pr` for example), we can allow users to load the dataset + get the Faiss index that comes with it to do nearest neighbors queries. ## Implementation proposition - Faiss indexes have to be added to the `nlp.Dataset` object, and therefore it's in a different scope that what are doing the `_split_generators` and `_generate_examples` methods of `nlp.DatasetBuilder`. Therefore I added a new method for post processing of the `nlp.Dataset` object called `_post_process` (name could change) - The role of `_post_process` is to apply dataset transforms (filter/map etc.) or indexing functions (add_faiss_index) to modify/enrich the `nlp.Dataset` object. It is not part of the `download_and_prepare` process (that is focused on arrow files creation) so the post processing is run inside the `as_dataset` method. - `_post_process` can generate new files (cached files from dataset transforms or serialized faiss indexes) and their names are defined by `_post_processing_resources` - as we know what are the post processing resources, we can download them automatically from google storage instead of computing them if they're available (as we do for arrow files) I'd happy to discuss these choices ! ## The `wiki_dpr` index It takes 1h20 and ~7GB of memory to compute. The final index is 1.42GB and takes ~1.5GB of memory. This is pretty cool given that a naive flat index would take 170GB of memory to store the 21M vectors of dim 768. I couldn't use directly the Faiss `index_factory` as I needed to set the metric to inner product. ## Example of usage ```python import nlp dset = nlp.load_dataset( "wiki_dpr", "psgs_w100_with_nq_embeddings", split="train", with_index=True ) print(len(dset), dset.list_indexes()) # (21015300, ['embeddings']) ``` (it also works with the dataset configuration without the embeddings because I added the index file in google storage for this one too) ## Demo You can also check a demo on google colab that shows how to use it with the DPRQuestionEncoder from transformers: https://colab.research.google.com/drive/1FakNU8W5EPMcWff7iP1H6REg3XSS0YLp?usp=sharing
closed
https://github.com/huggingface/datasets/pull/374
2020-07-10T16:25:59
2020-07-13T13:44:03
2020-07-13T13:44:01
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
654,845,133
373
Segmentation fault when loading local JSON dataset as of #372
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data') ``` causes ``` Using custom data configuration default Downloading and preparing dataset json/default (download: Unknown size, generated: Unknown size, total: Unknown size) to /home/XXX/.cache/huggingface/datasets/json/default/0.0.0... 0 tables [00:00, ? tables/s]Segmentation fault (core dumped) ``` where `./datasets/train-v2.0.json` is downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/. This is consistent with other SQuAD-formatted JSON files. When attempting to load the dataset again, I get the following: ``` Using custom data configuration default Traceback (most recent call last): File "dataloader.py", line 6, in <module> 'json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, field='data') File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset save_infos=save_infos, File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 382, in download_and_prepare with incomplete_dir(self._cache_dir) as tmp_data_dir: File "/home/XXX/.conda/envs/torch/lib/python3.7/contextlib.py", line 112, in __enter__ return next(self.gen) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 368, in incomplete_dir os.makedirs(tmp_dir) File "/home/XXX/.conda/envs/torch/lib/python3.7/os.py", line 223, in makedirs mkdir(name, mode) FileExistsError: [Errno 17] File exists: '/home/XXX/.cache/huggingface/datasets/json/default/0.0.0.incomplete' ``` (Not sure if you wanted this in the previous issue #369 or not as it was closed.)
closed
https://github.com/huggingface/datasets/issues/373
2020-07-10T15:04:25
2022-10-04T18:05:47
2022-10-04T18:05:47
{ "login": "vegarab", "id": 24683907, "type": "User" }
[]
false
[]
654,774,420
372
Make the json script more flexible
Fix https://github.com/huggingface/nlp/issues/359 Fix https://github.com/huggingface/nlp/issues/369 JSON script now can accept JSON files containing a single dict with the records as a list in one attribute to the dict (previously it only accepted JSON files containing records as rows of dicts in the file). In this case, you should indicate using `field=XXX` the name of the field in the JSON structure which contains the records you want to load. The records can be a dict of lists or a list of dicts. E.g. to load the SQuAD dataset JSON (without using the `squad` specific dataset loading script), in which the data rows are in the `data` field of the JSON dict, you can do: ```python from nlp import load_dataset dataset = load_dataset('json', data_files='/PATH/TO/JSON', field='data') ```
closed
https://github.com/huggingface/datasets/pull/372
2020-07-10T13:15:15
2020-07-10T14:52:07
2020-07-10T14:52:06
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
654,668,242
371
Fix cached file path for metrics with different config names
The config name was not taken into account to build the cached file path. It should fix #368
closed
https://github.com/huggingface/datasets/pull/371
2020-07-10T10:02:24
2020-07-10T13:45:22
2020-07-10T13:45:20
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
654,304,193
370
Allow indexing Dataset via np.ndarray
closed
https://github.com/huggingface/datasets/pull/370
2020-07-09T19:43:15
2020-07-10T14:05:44
2020-07-10T14:05:43
{ "login": "jarednielsen", "id": 4564897, "type": "User" }
[]
true
[]
654,186,890
369
can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries
Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB): ``` dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]}) ``` causes ``` Traceback (most recent call last): File "dataloader.py", line 9, in <module> ["./path/to/file.json"]}) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/load.py", line 524, in load_dataset save_infos=save_infos, File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 432, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 483, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/builder.py", line 719, in _prepare_split for key, table in utils.tqdm(generator, unit=" tables", leave=False): File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/tqdm/std.py", line 1129, in __iter__ for obj in iterable: File "/home/XXX/.conda/envs/torch/lib/python3.7/site-packages/nlp/datasets/json/88c1bc5c68489f7eda549ed05a5a738527c613b3e7a4ee3524d9d233353a949b/json.py", line 53, in _generate_tables file, read_options=self.config.pa_read_options, parse_options=self.config.pa_parse_options, File "pyarrow/_json.pyx", line 191, in pyarrow._json.read_json File "pyarrow/error.pxi", line 85, in pyarrow.lib.check_status pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?) ``` I haven't been able to find any reports of this specific pyarrow error here or elsewhere.
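The error's own hint ("try to increase block size") can be exercised directly against pyarrow, as a rough sketch; the block size value is an arbitrary assumption, and wiring the option through the `nlp` json script would be a separate step.

```python
import pyarrow.json as paj

# A single JSON object larger than the read block triggers the
# "straddling object straddles two block boundaries" error; raising the
# block size is the workaround the message hints at.
read_options = paj.ReadOptions(block_size=1 << 25)  # 32 MiB, an assumed value
table = paj.read_json("./path/to/file.json", read_options=read_options)
print(table.num_rows)
```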
closed
https://github.com/huggingface/datasets/issues/369
2020-07-09T16:16:53
2020-12-15T23:07:22
2020-07-10T14:52:06
{ "login": "vegarab", "id": 24683907, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
654,087,251
368
load_metric can't acquire lock anymore
I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this? Traceback (most recent call last): File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 101, in __init__ self.filelock.acquire(timeout=1) File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/filelock.py", line 278, in acquire raise Timeout(self._lock_file) filelock.Timeout: The file lock '/home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock' could not be acquired. During handling of the above exception, another exception occurred: Traceback (most recent call last): File "examples_huggingface_nlp.py", line 268, in <module> main() File "examples_huggingface_nlp.py", line 242, in main dataset, metric = get_dataset_metric(glue_task) File "examples_huggingface_nlp.py", line 77, in get_dataset_metric metric = nlp.load_metric('glue', glue_config, experiment_id=1) File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/load.py", line 440, in load_metric **metric_init_kwargs, File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/nlp/metric.py", line 104, in __init__ "Cannot acquire lock, caching file might be used by another process, " ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id' for this run. I0709 15:54:41.008838 139854118430464 filelock.py:318] Lock 139852058030936 released on /home/XXX/.cache/huggingface/metrics/glue/1.0.0/1-glue-0.arrow.lock
closed
https://github.com/huggingface/datasets/issues/368
2020-07-09T14:04:09
2020-07-10T13:45:20
2020-07-10T13:45:20
{ "login": "ydshieh", "id": 2521628, "type": "User" }
[]
false
[]
654,012,984
367
Update Xtreme to add PAWS-X es
This PR adds `PAWS-X.es` to the Xtreme dataset #362
closed
https://github.com/huggingface/datasets/pull/367
2020-07-09T12:14:37
2020-07-09T12:37:11
2020-07-09T12:37:10
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
653,954,896
366
Add quora dataset
Added the [Quora question pairs dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs). Implementation Notes: - I used the original version provided on the quora website. There's also a [Kaggle competition](https://www.kaggle.com/c/quora-question-pairs) which has a nice train/test split but I can't find an easy way to download it. - I've made the questions into a list: ```python { "questions": [ {"id":0, "text": "Is this an example question?"}, {"id":1, "text": "Is this a sample question?"}, ], ... } ``` rather than: ```python { "question1": "Is this an example question?", "question2": "Is this a sample question?" "qid0": 0 "qid1": 1 ... } ``` Not sure if this was the right call. - Can't find a good citation for this dataset
closed
https://github.com/huggingface/datasets/pull/366
2020-07-09T10:34:22
2020-07-13T17:35:21
2020-07-13T17:35:21
{ "login": "ghomasHudson", "id": 13795113, "type": "User" }
[]
true
[]
653,845,964
365
How to augment data?
Is there any clean way to augment data? For now my workaround is to use a batched map, like this: ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = dataset.map(aug, batched=True) ```
closed
https://github.com/huggingface/datasets/issues/365
2020-07-09T07:52:37
2020-07-10T09:12:07
2020-07-10T08:22:15
{ "login": "astariul", "id": 43774355, "type": "User" }
[]
false
[]
653,821,597
364
add MS MARCO dataset
This PR adds the MS MARCO dataset as requested in issue #336. MS MARCO has multiple tasks, including: - Passage and Document Retrieval - Keyphrase Extraction - QA and NLG This PR only adds the 2 versions of the QA and NLG task dataset, which was released with the original paper here https://arxiv.org/pdf/1611.09268.pdf Tests are failing because of the dummy data. I tried to fix it without success. Can you please have a look at it? @patrickvonplaten, @lhoestq
closed
https://github.com/huggingface/datasets/pull/364
2020-07-09T07:11:19
2020-08-06T06:15:49
2020-08-06T06:15:48
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
653,821,172
363
Adding support for generic multi dimensional tensors and auxillary image data for multimodal datasets
nlp/features.py: The main factory class is MultiArray; every time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples of working with this in datasets/lxmert_pretraining_beta/test_multi_array.py. src/nlp/arrow_writer.py: I had to add a method for writing batches that include extension array types because, despite having a unique class for each multidimensional array shape, pyarrow is unable to write any other "array-like" data class to a batch object unless it is of the type pyarrow.ExtensionType. The problem with this is that when writing multiple batches, the order of the schema and the data to be written gets mixed up (the pyarrow datatype in the schema only refers to an ExtensionArray, but each ExtensionArray subclass has a different shape) ... possibly I am missing something here and would be grateful if anyone else could take a look! datasets/lxmert_pretraining_beta/lxmert_pretraining_beta.py & datasets/lxmert_pretraining_beta/to_arrow_data.py: I have begun adding the data from the original LXMERT paper (https://arxiv.org/abs/1908.07490) hosted here: (https://github.com/airsplay/lxmert). The reason I am not pulling from the source of truth for each individual dataset is that it seems there will also need to be functionality to aggregate multimodal datasets to create a pre-training corpus (:sleepy: ). For now, this is just being used to test and run edge cases for the MultiArray feature, so I've labeled it as "beta_pretraining"! (Still working on the pretraining, I just wanted to push out the new functionality sooner rather than later.)
closed
https://github.com/huggingface/datasets/pull/363
2020-07-09T07:10:30
2020-08-24T09:59:35
2020-08-24T09:59:35
{ "login": "eltoto1219", "id": 14030663, "type": "User" }
[]
true
[]
653,766,245
362
[dataset subset missing] xtreme paws-x
I tried nlp.load_dataset('xtreme', 'PAWS-X.es') but get a ValueError. It turns out that the subset for Spanish is missing: https://github.com/google-research-datasets/paws/tree/master/pawsx
closed
https://github.com/huggingface/datasets/issues/362
2020-07-09T05:04:54
2020-07-09T12:38:42
2020-07-09T12:38:42
{ "login": "cosmeowpawlitan", "id": 50871412, "type": "User" }
[]
false
[]
653,757,376
361
🐛 [Metrics] ROUGE is non-deterministic
If I run the ROUGE metric 2 times, with the same predictions / references, the scores are slightly different. Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem. Example of F-scores for ROUGE-1, ROUGE-2, ROUGE-L in 2 different runs: > ['0.3350', '0.1470', '0.2329'] ['0.3358', '0.1451', '0.2332'] --- Why is ROUGE not deterministic?
closed
https://github.com/huggingface/datasets/issues/361
2020-07-09T04:39:37
2022-09-09T15:20:55
2020-07-20T23:48:37
{ "login": "astariul", "id": 43774355, "type": "User" }
[]
false
[]
653,687,176
360
[Feature request] Add dataset.ragged_map() function for many-to-many transformations
`dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines. `dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero or one example. This is helpful for removing portions from the dataset. However, some dataset transformations are many-to-many. Consider constructing BERT training examples from a dataset of sentences, where you map `["a", "b", "c"] -> ["a[SEP]b", "a[SEP]c", "b[SEP]c", "c[SEP]b", ...]` I propose a more general `ragged_map()` method that takes in a batch of examples of length `N` and returns a batch of examples of length `M`. This is different from the `map(batched=True)` method, which takes examples of length `N` and returns a batch of length `N`, processing individual examples in parallel. I don't have a clear vision of how this would be implemented efficiently and lazily, but would love to hear the community's feedback on this. My specific use case is creating an end-to-end ELECTRA data pipeline. I would like to take the raw WikiText data and generate training examples from this using the `ragged_map()` method, then export to TFRecords and train quickly. This would be a reproducible pipeline with no bash scripts. Currently I'm relying on scripts like https://github.com/google-research/electra/blob/master/build_pretraining_dataset.py, which are less general.
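For illustration, here is a minimal sketch of what the proposed API could look like. The `ragged_map()` method does not exist; its name and signature here are purely hypothetical, taken from the request above:
```python
from nlp import load_dataset

dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def make_pairs(batch):
    # take N sentences in, return up to N * (N - 1) pairs out
    texts = batch["text"]
    return {"text": [a + "[SEP]" + b for a in texts for b in texts if a != b]}

# hypothetical: a batch-in/batch-out transform where the output length differs from the input length
paired = dataset.ragged_map(make_pairs, batched=True)
```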
closed
https://github.com/huggingface/datasets/issues/360
2020-07-09T01:04:43
2020-07-09T19:31:51
2020-07-09T19:31:51
{ "login": "jarednielsen", "id": 4564897, "type": "User" }
[]
false
[]
653,656,279
359
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures
I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-23-9aecfbee53bd> in <module> 55 from nlp import load_dataset 56 ---> 57 ds = load_dataset("../text2struct/model/dataset_builder.py", data_files=rel_datafiles) 58 59 ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/load.py in load_dataset(path, name, version, data_dir, data_files, split, cache_dir, download_config, download_mode, ignore_verifications, save_infos, **config_kwargs) 522 download_mode=download_mode, 523 ignore_verifications=ignore_verifications, --> 524 save_infos=save_infos, 525 ) 526 ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, save_infos, try_from_hf_gcs, dl_manager, **download_and_prepare_kwargs) 430 verify_infos = not save_infos and not ignore_verifications 431 self._download_and_prepare( --> 432 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 433 ) 434 # Sync info ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 481 try: 482 # Prepare split will record examples associated to the split --> 483 self._prepare_split(split_generator, **prepare_split_kwargs) 484 except OSError: 485 raise OSError("Cannot find data file. " + (self.manual_download_instructions or "")) ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in _prepare_split(self, split_generator) 736 schema_dict[field.name] = Value(str(field.type)) 737 --> 738 parse_schema(writer.schema, features) 739 self.info.features = Features(features) 740 ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/builder.py in parse_schema(schema, schema_dict) 734 parse_schema(field.type.value_type, schema_dict[field.name]) 735 else: --> 736 schema_dict[field.name] = Value(str(field.type)) 737 738 parse_schema(writer.schema, features) <string> in __init__(self, dtype, id, _type) ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in __post_init__(self) 55 56 def __post_init__(self): ---> 57 self.pa_type = string_to_arrow(self.dtype) 58 59 def __call__(self): ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/nlp/features.py in string_to_arrow(type_str) 32 if str(type_str + "_") not in pa.__dict__: 33 raise ValueError( ---> 34 f"Neither {type_str} nor {type_str + '_'} seems to be a pyarrow data type. " 35 f"Please make sure to use a correct data type, see: " 36 f"https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions" ValueError: Neither list<item: string> nor list<item: string>_ seems to be a pyarrow data type. Please make sure to use a correct data type, see: https://arrow.apache.org/docs/python/api/datatypes.html#factory-functions ``` If I create the dataset imperatively, using a pyarrow table, the dataset is created correctly. If I override the `_prepare_split` method to avoid calling the validate schema, the dataset can load as well.
closed
https://github.com/huggingface/datasets/issues/359
2020-07-08T23:24:05
2020-07-10T14:52:06
2020-07-10T14:52:06
{ "login": "timothyjlaurent", "id": 2000204, "type": "User" }
[]
false
[]
653,645,121
358
Starting to add some real doc
Adding a lot of documentation for: - load a dataset - explore the dataset object - process data with the dataset - add a new dataset script - share a dataset script - full package reference This version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html Also: - fix a bug in `train_test_split` - update the `csv` script - add a verbose argument to the dataset processing methods Still missing: - doc for the metrics - how to directly upload a community provided dataset with the CLI - clean up more docstrings - add the `features` argument to `load_dataset` (should be another PR)
closed
https://github.com/huggingface/datasets/pull/358
2020-07-08T22:53:03
2020-07-14T09:58:17
2020-07-14T09:58:15
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
true
[]
653,642,292
357
Add hashes to cnn_dailymail
The URL hashes are helpful for comparing results from other sources.
closed
https://github.com/huggingface/datasets/pull/357
2020-07-08T22:45:21
2020-07-13T14:16:38
2020-07-13T14:16:38
{ "login": "jbragg", "id": 2238344, "type": "User" }
[]
true
[]
653,537,388
356
Add text dataset
Usage: ```python from nlp import load_dataset dset = load_dataset("text", data_files="/path/to/file.txt")["train"] ``` I created a dummy_data.zip which contains three files: `train.txt`, `test.txt`, `dev.txt`. Each of these contains two lines. It passes ```bash RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_text ``` but I would like a second set of eyes to ensure I did it right.
closed
https://github.com/huggingface/datasets/pull/356
2020-07-08T19:21:53
2020-07-10T14:19:03
2020-07-10T14:19:03
{ "login": "jarednielsen", "id": 4564897, "type": "User" }
[]
true
[]
653,451,013
355
can't load SNLI dataset
`nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't. Is there a plan to move these datasets to huggingface servers for a more stable solution? Btw, here's the stack trace: ``` File "/content/nlp/src/nlp/builder.py", line 432, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/content/nlp/src/nlp/builder.py", line 466, in _download_and_prepare split_generators = self._split_generators(dl_manager, **split_generators_kwargs) File "/content/nlp/src/nlp/datasets/snli/e417f6f2e16254938d977a17ed32f3998f5b23e4fcab0f6eb1d28784f23ea60d/snli.py", line 76, in _split_generators dl_dir = dl_manager.download_and_extract(_DATA_URL) File "/content/nlp/src/nlp/utils/download_manager.py", line 217, in download_and_extract return self.extract(self.download(url_or_urls)) File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in download lambda url: cached_path(url, download_config=self._download_config,), url_or_urls, File "/content/nlp/src/nlp/utils/py_utils.py", line 190, in map_nested return function(data_struct) File "/content/nlp/src/nlp/utils/download_manager.py", line 156, in <lambda> lambda url: cached_path(url, download_config=self._download_config,), url_or_urls, File "/content/nlp/src/nlp/utils/file_utils.py", line 198, in cached_path local_files_only=download_config.local_files_only, File "/content/nlp/src/nlp/utils/file_utils.py", line 356, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://nlp.stanford.edu/projects/snli/snli_1.0.zip ```
closed
https://github.com/huggingface/datasets/issues/355
2020-07-08T16:54:14
2020-07-18T05:15:57
2020-07-15T07:59:01
{ "login": "jxmorris12", "id": 13238952, "type": "User" }
[]
false
[]
653,357,617
354
More faiss control
Allow users to specify a faiss index they created themselves, since indexes can sometimes be composite, for example.
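For context, a minimal sketch of the intended usage (the `add_faiss_index` call and the `custom_index` argument name are assumptions here, not confirmed by this PR; the point is only that a user-built, possibly composite, faiss index can be handed to the dataset):
```python
import faiss
import numpy as np

d = 768  # embedding dimension
# build a composite index ourselves: IVF coarse quantizer + product quantization codes
quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, 100, 16, 8)
index.train(np.random.rand(10_000, d).astype("float32"))

# hypothetical argument name: pass the pre-built index instead of letting the library create one
# (dset is assumed to be an nlp.Dataset with an "embeddings" column)
dset.add_faiss_index(column="embeddings", custom_index=index)
```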
closed
https://github.com/huggingface/datasets/pull/354
2020-07-08T14:45:20
2020-07-09T09:54:54
2020-07-09T09:54:51
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
653,250,611
353
[Dataset requests] New datasets for Text Classification
We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - [x] TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]** - #386 - [x] Yelp-5 - #1315 - [x] Movie review (Movie Review (MR) dataset [156]) **[done (same as rotten_tomatoes)]** - [x] SST (Stanford Sentiment Treebank) **[include in glue]** - #1934 - [ ] Multi-Perspective Question Answering (MPQA) dataset **[require authentication (indeed manual download)]** - [x] Amazon. This is a popular corpus of product reviews collected from the Amazon website [159]. It contains labels for both binary classification and multi-class (5-class) classification - #791 - #1389 - [x] 20 Newsgroups. The 20 Newsgroups dataset **[done]** - #410 - [x] Sogou News dataset **[done]** - #450 - [x] Reuters news. The Reuters-21578 dataset [165] **[done]** - #471 - [x] DBpedia. The DBpedia dataset [170] - #1116 - [ ] Ohsumed. The Ohsumed collection [171] is a subset of the MEDLINE database - [ ] EUR-Lex. The EUR-Lex dataset - [x] WOS. The Web Of Science (WOS) dataset **[done]** - #424 - [ ] PubMed. PubMed [173] - [x] TREC-QA: TREC-6 + TREC-50 - See above: TREC-6 dataset - [x] Quora. The Quora dataset [180] - #366 All these datasets are cited in https://arxiv.org/abs/2004.03705
open
https://github.com/huggingface/datasets/issues/353
2020-07-08T12:17:58
2025-04-05T09:28:15
null
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[ { "name": "help wanted", "color": "008672" }, { "name": "dataset request", "color": "e99695" } ]
false
[]
653,128,883
352
🐛[BugFix]fix seqeval
Fix how seqeval processes labels such as 'B' and 'B-ARGM-LOC'.
closed
https://github.com/huggingface/datasets/pull/352
2020-07-08T09:12:12
2020-07-16T08:26:46
2020-07-16T08:26:46
{ "login": "AlongWY", "id": 20281571, "type": "User" }
[]
true
[]
652,424,048
351
add pandas dataset
Create a dataset from serialized pandas dataframes. Usage: ```python from nlp import load_dataset dset = load_dataset("pandas", data_files="df.pkl")["train"] ```
closed
https://github.com/huggingface/datasets/pull/351
2020-07-07T15:38:07
2020-07-08T14:15:16
2020-07-08T14:15:15
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
652,398,691
350
add from_pandas and from_dict
I added two new methods to the `Dataset` class: - `from_pandas()` to create a dataset from a pandas dataframe - `from_dict()` to create a dataset from a dictionary (keys = columns) It uses the `pa.Table.from_pandas` and `pa.Table.from_pydict` functions to do so. It is also possible to specify the feature types via `features=...` if there are ambiguities (null/nan values); otherwise the arrow schema is inferred from the data automatically by pyarrow. One question that I have right now: + Should we also add a `save()` method that would write the dataset to disk? Right now, if we create a `Dataset` using those two new methods, the data are kept in RAM. Then to reload it we can call the `from_file()` method.
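A minimal usage sketch of the two new constructors (assuming they are exposed as class methods on `Dataset`, mirroring the pyarrow functions mentioned above):
```python
import pandas as pd
from nlp import Dataset, Features, Value

# from a pandas dataframe
df = pd.DataFrame({"text": ["hello", "world"], "label": [0, 1]})
dset_from_df = Dataset.from_pandas(df)

# from a dict of columns, resolving ambiguous types explicitly via features=...
features = Features({"text": Value("string"), "label": Value("int64")})
dset_from_dict = Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]}, features=features)
```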
closed
https://github.com/huggingface/datasets/pull/350
2020-07-07T15:03:53
2020-07-08T14:14:33
2020-07-08T14:14:32
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
652,231,571
349
Hyperpartisan news detection
Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and what kinds of biases they display. Implementation notes: - As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before use. I've used the manual download stuff for this, although the dataset is provided under a Creative Commons Attribution 4.0 International License, so we could host a version if we wanted to? - The 'bias' attribute doesn't exist for the 'byarticle' configuration. I've added an empty string to the class labels to deal with this. Is there a more standard value for empty data? - Should we always subclass `nlp.BuilderConfig`?
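For reference, a minimal sketch of what subclassing the config could look like; the field name below is an assumption for illustration, not the actual implementation:
```python
import nlp

class HyperpartisanConfig(nlp.BuilderConfig):
    """Hypothetical config carrying per-configuration options for the dataset."""

    def __init__(self, validated_by="publisher", **kwargs):
        super().__init__(**kwargs)
        # "byarticle" vs "bypublisher" style option, purely illustrative
        self.validated_by = validated_by
```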
closed
https://github.com/huggingface/datasets/pull/349
2020-07-07T11:06:37
2020-07-07T20:47:27
2020-07-07T14:57:11
{ "login": "ghomasHudson", "id": 13795113, "type": "User" }
[]
true
[]
652,158,308
348
Add OSCAR dataset
I don't know if the tests pass; when I run them, they try to download the whole corpus, which is around 3.5TB compressed, and I don't have that kind of space. I'll really need some help with it 😅 Thanks!
closed
https://github.com/huggingface/datasets/pull/348
2020-07-07T09:22:07
2021-05-03T22:07:08
2021-02-09T10:19:19
{ "login": "pjox", "id": 635220, "type": "User" }
[]
true
[]
652,106,567
347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to a Python source encoding issue, i.e. my PC is trying to decode the source code with the wrong encoding-decoding tools, perhaps: https://www.python.org/dev/peps/pep-0263/ I guess the error was triggered by the code "module = importlib.import_module(module_path)" at line 57 in the source file nlp/src/nlp/load.py (https://github.com/huggingface/nlp/blob/911d5596f9b500e39af8642fe3d1b891758999c7/src/nlp/load.py#L51) Any ideas? P.S. I tried the same code on Colab and it runs perfectly.
closed
https://github.com/huggingface/datasets/issues/347
2020-07-07T08:14:23
2020-09-07T14:51:45
2020-09-07T14:51:45
{ "login": "cosmeowpawlitan", "id": 50871412, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
652,044,151
346
Add emotion dataset
Hello 🤗 team! I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/merged_training.pkl)). With the current implementation, running ```bash python nlp-cli test datasets/emotion --save_infos --all_configs ``` throws a `_pickle.UnpicklingError: invalid load key, '<'.` error (full stack trace below). The strange thing is that the path to the file does not carry the `.pkl` extension and instead appears to be some md5 hash (see the `FILE PATH` print statement in the stack trace). Note: I have checked that the `merged_training.pkl` file is not corrupted when I download it with `wget`. Any pointers on what I'm doing wrong would be greatly appreciated! **Stack trace** ``` INFO:nlp.load:Checking datasets/emotion/emotion.py for additional imports. INFO:filelock:Lock 140330435928512 acquired on datasets/emotion/emotion.py.lock INFO:nlp.load:Found main folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion INFO:nlp.load:Creating specific version folder for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b INFO:nlp.load:Copying script file from datasets/emotion/emotion.py to /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py INFO:nlp.load:Couldn't find dataset infos file at datasets/emotion/dataset_infos.json INFO:nlp.load:Creating metadata file for dataset datasets/emotion/emotion.py at /Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.json INFO:filelock:Lock 140330435928512 released on datasets/emotion/emotion.py.lock INFO:nlp.builder:Generating dataset emotion (/Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0) INFO:nlp.builder:Dataset not on Hf google storage. Downloading and preparing it from source Downloading and preparing dataset emotion/emotion (download: Unknown size, generated: Unknown size, total: Unknown size) to /Users/lewtun/.cache/huggingface/datasets/emotion/emotion/1.0.0... INFO:nlp.builder:Generating split train 0 examples [00:00, ? examples/s]FILE PATH /Users/lewtun/.cache/huggingface/datasets/3615dcb52b7ba052ef63e1571894c4b67e8e12a6ab1ef2f756ec3c380bf48490 Traceback (most recent call last): File "nlp-cli", line 37, in <module> service.run() File "/Users/lewtun/git/nlp/src/nlp/commands/test.py", line 83, in run builder.download_and_prepare( File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 431, in download_and_prepare self._download_and_prepare( File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 483, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/Users/lewtun/git/nlp/src/nlp/builder.py", line 664, in _prepare_split for key, record in utils.tqdm(generator, unit=" examples", total=split_info.num_examples, leave=False): File "/Users/lewtun/miniconda3/envs/nlp/lib/python3.8/site-packages/tqdm/std.py", line 1129, in __iter__ for obj in iterable: File "/Users/lewtun/git/nlp/src/nlp/datasets/emotion/59666994754d1b369228a749b695e377643d141fa98c6972be00407659788c7b/emotion.py", line 87, in _generate_examples data = pickle.load(f) _pickle.UnpicklingError: invalid load key, '<'. ```
closed
https://github.com/huggingface/datasets/pull/346
2020-07-07T06:35:41
2022-05-30T15:16:44
2020-07-13T14:39:38
{ "login": "lewtun", "id": 26859204, "type": "User" }
[]
true
[]
651,761,201
345
Supporting documents in ELI5
I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to facebook, the entire document collection is quite large. However, it would still be helpful to at least include a subset of the supporting documents i.e., having some data is better than having a block of cheese, in my case at least. If you choose not to include them, it would be helpful to have documentation mentioning this specifically. It is especially confusing because the hf nlp ELI5 dataset has the key `'document'` but there are no documents to be found :(
closed
https://github.com/huggingface/datasets/issues/345
2020-07-06T19:14:13
2020-10-27T15:38:45
2020-10-27T15:38:45
{ "login": "saverymax", "id": 29262273, "type": "User" }
[]
false
[]
651,495,246
344
Search qa
This PR adds the SearchQA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config names: - raw_jeopardy: the raw data - train_test_val: the split version #336
closed
https://github.com/huggingface/datasets/pull/344
2020-07-06T12:23:16
2020-07-16T08:58:16
2020-07-16T08:58:16
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
651,419,630
343
Fix nested tensorflow format
In #339 and #337 we are thinking about adding a way to export datasets to tfrecords. However, I noticed that it was not possible to do `dset.set_format("tensorflow")` on datasets with nested features like `squad`. I fixed that using nested map operations to convert features to `tf.ragged.constant`. I also added tests on the `set_format` function.
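A small sketch of the idea (not the exact code of the PR): variable-length nested features can be wrapped with `tf.ragged.constant` so they survive the tensorflow formatting:
```python
import tensorflow as tf

# e.g. a squad-style nested feature: one list of answer starts per example
answer_starts = [[177], [39, 40], []]
ragged = tf.ragged.constant(answer_starts)  # RaggedTensor of shape (3, None)
print(ragged.to_list())  # [[177], [39, 40], []]
```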
closed
https://github.com/huggingface/datasets/pull/343
2020-07-06T10:13:45
2020-07-06T13:11:52
2020-07-06T13:11:51
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
651,333,194
342
Features should be updated when `map()` changes schema
`dataset.map()` can change the schema and column names. We should update the features in this case (with whatever can be inferred).
closed
https://github.com/huggingface/datasets/issues/342
2020-07-06T08:03:23
2020-07-23T10:15:16
2020-07-23T10:15:16
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[]
false
[]
650,611,969
341
add fever dataset
This PR adds the FEVER dataset https://fever.ai/ used in the paper: FEVER: a large-scale dataset for Fact Extraction and VERification (https://arxiv.org/pdf/1803.05355.pdf). #336
closed
https://github.com/huggingface/datasets/pull/341
2020-07-03T13:53:07
2020-07-06T13:03:48
2020-07-06T13:03:47
{ "login": "mariamabarham", "id": 38249783, "type": "User" }
[]
true
[]
650,533,920
340
Update cfq.py
Make the dataset name consistent with the paper: Compositional Freebase Question => Compositional Freebase Questions.
closed
https://github.com/huggingface/datasets/pull/340
2020-07-03T11:23:19
2020-07-03T12:33:50
2020-07-03T12:33:50
{ "login": "brainshawn", "id": 4437290, "type": "User" }
[]
true
[]
650,156,468
339
Add dataset.export() to TFRecords
Fixes https://github.com/huggingface/nlp/issues/337 Some design decisions: - Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitting. - Use `from_generator()` instead of `from_tensor_slices()` to address the memory issues discussed in https://github.com/huggingface/nlp/issues/315 and https://github.com/huggingface/nlp/issues/193. - Performs introspection using the values from `dataset.set_format()` to identify the TF datatypes. Currently it supports string, float, and int. If this should be extended for other datatypes, let me know. - There are quite a few helper functions required within the `export()` method. If these are better placed in a utils file somewhere, let me know. Also, I noticed that ```python dataset = dataset.select(indices) dataset.set_format("tensorflow") # dataset._format_type is "tensorflow" ``` gives a different output than ```python dataset.set_format("tensorflow") dataset = dataset.select(indices) # dataset._format_type is None ``` The latter loses the format of its parent dataset. Is there interest in making `set_format` a functional method that returns itself (can be chained), and that derived datasets maintain the format of their parent?
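As an illustration of the serialization step (a simplified sketch, not this PR's actual helper functions), each formatted record can be turned into a `tf.train.Example` before being written with a `TFRecordWriter`:
```python
import tensorflow as tf

def to_tf_example(record):
    # sketch: assume every value is a list of ints (e.g. input_ids, attention_mask)
    feature = {
        name: tf.train.Feature(int64_list=tf.train.Int64List(value=values))
        for name, values in record.items()
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

with tf.io.TFRecordWriter("/tmp/myrecord.tfrecord") as writer:
    example = to_tf_example({"input_ids": [101, 2023, 102], "attention_mask": [1, 1, 1]})
    writer.write(example.SerializeToString())
```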
closed
https://github.com/huggingface/datasets/pull/339
2020-07-02T19:26:27
2020-07-22T09:16:12
2020-07-22T09:16:12
{ "login": "jarednielsen", "id": 4564897, "type": "User" }
[]
true
[]
650,057,253
338
Run `make style`
These files get changed when I run `make style` on an unrelated PR. Upstreaming these changes so development on a different branch can be easier.
closed
https://github.com/huggingface/datasets/pull/338
2020-07-02T16:19:47
2020-07-02T18:03:10
2020-07-02T18:03:10
{ "login": "jarednielsen", "id": 4564897, "type": "User" }
[]
true
[]
650,035,887
337
[Feature request] Export Arrow dataset to TFRecords
The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API: ```python # use these existing methods ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train") ds = ds.map(lambda ex: tokenizer(ex)) ds.set_format("tensorflow", columns=["input_ids", "token_type_ids", "attention_mask"]) # then add this method ds.export(folder="/my/tfrecords", prefix="myrecord", num_shards=8, format="tfrecord") ``` which would create files like so: ```bash /my/tfrecords/myrecord_1.tfrecord /my/tfrecords/myrecord_2.tfrecord ... ``` I would be happy to contribute this method. We could use a similar approach for PyTorch. Thoughts?
closed
https://github.com/huggingface/datasets/issues/337
2020-07-02T15:47:12
2020-07-22T09:16:12
2020-07-22T09:16:12
{ "login": "jarednielsen", "id": 4564897, "type": "User" }
[]
false
[]
649,914,203
336
[Dataset requests] New datasets for Open Question Answering
We are still missing a few datasets for Open Question Answering, which is currently a field under strong development. Namely, it would be really nice to add: - WebQuestions (Berant et al., 2013) [done] - CuratedTrec (Baudis et al. 2015) [not open-source] - MS-MARCO (Nguyen et al. 2016) [done] - SearchQA (Dunn et al. 2017) [done] - FEVER (Thorne et al. 2018) [done] All these datasets are cited in http://arxiv.org/abs/2005.11401
closed
https://github.com/huggingface/datasets/issues/336
2020-07-02T13:03:03
2020-07-16T09:04:22
2020-07-16T09:04:22
{ "login": "thomwolf", "id": 7353373, "type": "User" }
[ { "name": "help wanted", "color": "008672" }, { "name": "dataset request", "color": "e99695" } ]
false
[]
649,765,179
335
BioMRC Dataset presented in BioNLP 2020 ACL Workshop
closed
https://github.com/huggingface/datasets/pull/335
2020-07-02T09:03:41
2020-07-15T08:02:07
2020-07-15T08:02:07
{ "login": "PetrosStav", "id": 15162021, "type": "User" }
[]
true
[]
649,661,791
334
Add dataset.shard() method
Fixes https://github.com/huggingface/nlp/issues/312
closed
https://github.com/huggingface/datasets/pull/334
2020-07-02T06:05:19
2020-07-06T12:35:36
2020-07-06T12:35:36
{ "login": "jarednielsen", "id": 4564897, "type": "User" }
[]
true
[]
649,236,516
333
fix variable name typo
closed
https://github.com/huggingface/datasets/pull/333
2020-07-01T19:13:50
2020-07-24T15:43:31
2020-07-24T08:32:16
{ "login": "stas00", "id": 10676103, "type": "User" }
[]
true
[]
649,140,135
332
Add wiki_dpr
Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from the English Wikipedia along with their 768-dim embeddings computed using DPR's context encoder. Notes on the implementation: - There are two configs: with and without the embeddings (73GB vs 14GB) - I used a non-fixed-size sequence of floats to describe the feature format of the embeddings. I wanted to use fixed-size sequences but I had issues reading the arrow file afterwards (for example `dataset[0]` was crashing) - I added the case for lists of urls as input to the download_manager
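For reference, a sketch of what a non-fixed-size sequence of floats looks like as a feature definition (the feature names here are illustrative, not necessarily the ones used in the script):
```python
from nlp import Features, Sequence, Value

features = Features({
    "text": Value("string"),
    "title": Value("string"),
    # variable-length sequence of floats holding the 768-dim DPR embedding
    "embeddings": Sequence(Value("float32")),
})
```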
closed
https://github.com/huggingface/datasets/pull/332
2020-07-01T17:12:00
2020-07-06T12:21:17
2020-07-06T12:21:16
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
648,533,199
331
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError`
``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0... Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/p/qdata/jm8wx/datasets/nlp/src/nlp/load.py", line 520, in load_dataset builder_instance.download_and_prepare( File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 431, in download_and_prepare self._download_and_prepare( File "/p/qdata/jm8wx/datasets/nlp/src/nlp/builder.py", line 488, in _download_and_prepare verify_splits(self.info.splits, split_dict) File "/p/qdata/jm8wx/datasets/nlp/src/nlp/utils/info_utils.py", line 70, in verify_splits raise NonMatchingSplitsSizesError(str(bad_splits)) nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='test', num_bytes=49424491, num_examples=11490, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='test', num_bytes=48931393, num_examples=11379, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='train', num_bytes=1249178681, num_examples=287113, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='train', num_bytes=1240618482, num_examples=285161, dataset_name='cnn_dailymail')}, {'expected': SplitInfo(name='validation', num_bytes=57149241, num_examples=13368, dataset_name='cnn_dailymail'), 'recorded': SplitInfo(name='validation', num_bytes=56637485, num_examples=13255, dataset_name='cnn_dailymail')}] ```
closed
https://github.com/huggingface/datasets/issues/331
2020-06-30T22:21:33
2020-07-09T13:03:40
2020-07-09T13:03:40
{ "login": "jxmorris12", "id": 13238952, "type": "User" }
[ { "name": "dataset bug", "color": "2edb81" } ]
false
[]
648,525,720
330
Doc red
Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes: - There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to reflect this. - As well as the relation id, the full relation name is mapped from `rel_info.json` - I renamed the 'h', 'r', 't' keys to 'head', 'relation' and 'tail' to make them more readable. - Used the fix from #319 to allow nested sequences of dicts.
closed
https://github.com/huggingface/datasets/pull/330
2020-06-30T22:05:31
2020-07-06T12:10:39
2020-07-05T12:27:29
{ "login": "ghomasHudson", "id": 13795113, "type": "User" }
[]
true
[]
648,446,979
329
[Bug] FileLock dependency incompatible with filesystem
I'm downloading a dataset successfully with `load_dataset("wikitext", "wikitext-2-raw-v1")` But when I attempt to cache it on an external volume, it hangs indefinitely: `load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount` The filesystem when hanging looks like this: ```bash /fsx ----downloads ----94be...73.lock ----wikitext ----wikitext-2-raw ----wikitext-2-raw-1.0.0.incomplete ``` It appears that on this filesystem, the FileLock object is forever stuck in its "acquire" stage. I have verified that the issue lies specifically with the `filelock` dependency: ```python open("/fsx/hello.txt", "w").write("hello") # succeeds from filelock import FileLock with FileLock("/fsx/hello.lock"): open("/fsx/hello.txt", "w").write("hello") # hangs indefinitely ``` Has anyone else run into this issue? I'd raise it directly on the FileLock repo, but that project appears abandoned with the last update over a year ago. Or if there's a solution that would remove the FileLock dependency from the project, I would appreciate that.
closed
https://github.com/huggingface/datasets/issues/329
2020-06-30T19:45:31
2024-12-26T15:13:39
2020-06-30T21:33:06
{ "login": "jarednielsen", "id": 4564897, "type": "User" }
[]
false
[]
648,326,841
328
Fork dataset
We have a multi-task learning model training setup that I'm trying to convert to use the Arrow-based nlp dataset. We're currently training a custom TensorFlow model, but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow parses raw text and json with Entity and Relation annotations and creates 2 datasets for training NER and Relation prediction heads. Is there some good way to "fork" a dataset? E.g. 1. text + json -> Dataset1 1. Dataset1 -> DatasetNER 1. Dataset1 -> DatasetREL or 1. text + json -> Dataset1 1. Dataset1 -> DatasetNER 1. Dataset1 + DatasetNER -> DatasetREL
closed
https://github.com/huggingface/datasets/issues/328
2020-06-30T16:42:53
2020-07-06T21:43:59
2020-07-06T21:43:59
{ "login": "timothyjlaurent", "id": 2000204, "type": "User" }
[]
false
[]
648,312,858
327
set seed for shuffling tests
Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)`
closed
https://github.com/huggingface/datasets/pull/327
2020-06-30T16:21:34
2020-07-02T08:34:05
2020-07-02T08:34:04
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
648,126,103
326
Large dataset in Squad2-format
At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contexts: 1,047,671 - Questions: 1,677,732 - Answers: 6,742,406 - Unanswerable: 377,398 It is already cleaned: <pre><code> train_data = [ { 'context': "this is the context", 'qas': [ { 'id': "00002", 'is_impossible': False, 'question': "whats is this", 'answers': [ { 'text': "answer", 'answer_start': 0 } ] }, { 'id': "00003", 'is_impossible': False, 'question': "question2", 'answers': [ { 'text': "answer2", 'answer_start': 1 } ] } ] } ] </code></pre> Because it is growing every day, we are thinking about a structure like this: we host a JSON file containing all the download links, and the script can load it dynamically. At the moment it is around ~20GB. Any advice on how to handle this, or a ready-to-use template?
closed
https://github.com/huggingface/datasets/issues/326
2020-06-30T12:18:59
2020-07-09T09:01:50
2020-07-09T09:01:50
{ "login": "flozi00", "id": 47894090, "type": "User" }
[]
false
[]